CN112489052A - Line structure light central line extraction method under complex environment - Google Patents
- Publication number
- CN112489052A (application number CN202011334449.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- light bar
- pixel
- point
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method for extracting the center line of a line structured light bar in a complex environment, comprising the following steps: preprocess the collected light bar image, including image denoising, RGB component extraction, and image binarization; crop a region of interest (ROI) from the preprocessed light bar image; apply a distance transform to the cropped image to obtain a coarse light bar extraction image; thin the light bar image with the Pavlidis thinning algorithm to obtain the light bar center line; and smooth the bulges and burrs of the center line with an adaptive smoothing algorithm to obtain an accurate center extraction image. The beneficial effects of the invention are: the method extracts the light bar center line well under different interference environments and copes with uneven gray scale and width distribution of the line structured light bar.
Description
Technical Field
The invention belongs to the technical field of machine vision and can be used to extract the center line of a line structured light bar in a complex environment.
Background
In recent years, with the rapid development of machine vision, vision-based measurement has come to occupy an important position in three-dimensional measurement, and line structured light three-dimensional measurement is widely applied in industrial production thanks to its high precision, strong real-time performance, simple structure, and large detection range. The most important part of three-dimensional measurement with line structured light is extracting the center line of the light bar in the image: the extraction precision of the center line directly determines the final measurement precision of the system. Because of factors such as the texture of the measured object's material, external illumination, and the background environment, the captured light bar pixels are not uniformly distributed; and since practical industrial applications demand high extraction speed, both the accuracy and the efficiency of light bar center line extraction are key.
Among traditional light bar extraction methods, the Steger algorithm is a classical method with good extraction quality, but it requires a large amount of convolution, so the computation is heavy, the run time is long, and real-time extraction of light bars is difficult. The directional template method can repair small broken lines, but its extraction quality is limited by the finite number of template directions, and an object surface with complex texture causes the light bar to deviate in more directions than the templates cover. The gray centroid method is susceptible to the environment and deforms badly where the curvature of the light bar changes sharply. Among improved algorithms, the center extraction algorithm combining the Hessian matrix with region growing overcomes the influence of noise on center point extraction, but it relies on the Hessian matrix to obtain the normal direction and needs large-scale Gaussian convolution, so the computational load is high. The extraction algorithm based on principal component analysis determines the normal direction of the image with only 2 Gaussian convolutions, which improves speed, but accurate extraction of the structured light is difficult under a complex background. The extraction algorithm based on a BP neural network overcomes the drawbacks of the gray centroid method and the Steger algorithm, but the network structure and the training sample data strongly affect the extraction quality.
Therefore, a light bar center line extraction algorithm that is simple, fast, adaptable to the environment, and highly precise is needed to meet the industrial requirement of real-time three-dimensional measurement of objects.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a method for extracting the center line of a line structured light stripe in a complex environment, which overcomes problems such as the measured object being influenced by external illumination, its own material and curvature, and the background environment; local reflections and high noise in the collected light bar image; and instability of the extracted light bar center line.
In order to achieve the purpose, the invention is realized by the following technical scheme:
a line structured light strip central line extraction method under a complex environment comprises the following steps:
(1) preprocessing the collected light bar image, including image denoising, RGB component extraction and image binarization;
(2) cutting a region of interest (ROI) of the preprocessed light bar image;
(3) performing distance transformation on the cut image to obtain a light bar rough extraction image;
(4) refining the light bar image by adopting a Pavlidis refining algorithm to obtain an initial value of a light bar central line;
(5) smoothing the bulges and burrs of the initial light bar center line by adopting a self-adaptive smoothing algorithm to obtain an accurate light bar center line extraction image.
Preferably, the preprocessing in step (1) comprises the following specific steps: denoise the acquired image with median filtering, extract the RGB components of the image, and binarize the image into a black-and-white image with the Kittler algorithm, where white is the foreground and black is the background.
Preferably, the specific process of cropping the region of interest of the light bar image in step (2) is: crop a rectangular area containing the foreground of the binary image to reduce the amount of data computation.
Preferably, the specific process of the distance transform of the light bar image in step (3) is as follows:
1) Divide the pixel points in the image into interior points, exterior points, and isolated points. A pixel is an interior point if its four upper, lower, left, and right neighborhoods are all 1, and an isolated point if all four are 0; the remaining points are defined as boundary points.
2) Let L be a connected region in the binary light bar image, comprising the target set P and the background set B, and let the distance map be D. The distance transform formula is:

D(p) = min{ dis(p, q) : p ∈ P, q ∈ B }

in the formula, min dis(·) represents the minimum distance from the target point p to the background set B, and the Euclidean distance function dis() is:

dis(p, q) = √((xp − xq)² + (yp − yq)²)
3) Traverse all interior points and non-interior points in the image; the point sets are P1 and P2. For each interior point (x, y) in P1, find its minimum distance to P2 with the distance formula dis(); these minimum distances form the set P3. Obtain the maximum value Max and the minimum value Min in P3 and assign each interior point the gray value Gray according to:

Gray(x, y) = 255 × |P3(x, y) − Min| / |Max − Min|

in the formula, P3(x, y) represents the shortest distance from point (x, y) of P1 to P2.
4) No operation is taken for the isolated points.
Preferably, the specific process of Pavlidis thinning of the light bar image in step (4) is as follows:
1) For the light bar image to be processed, the gray value of a black background pixel is 0, and the gray value of a foreground pixel to be thinned is its gray value after the distance transform.
2) Find the contour of the foreground pixels and mark it with 2.
3) If a pixel is an isolated point or an end point, mark its gray value as 3. Likewise, if the pixel is any other point whose deletion could change the 8-connectivity of its neighbors, mark the current pixel's gray value as 3.
4) Process the pixels not equal to 0: if all the pixels surrounding one are 2, mark it as 4. For the other non-deletable cases, also set the current pixel's gray value to 4.
5) Determine the contour points with gray value 2 again, and mark the deletable points with 5.
6) Perform the deletion operation on the points whose gray values are 2 and 5; the points whose final value is 4 are the thinned contour points.
Preferably, the specific process of adaptively smoothing the light bar image in the step (5) is as follows:
1) For the input image f0(i, j), adaptive smoothing filtering convolves a filter function with the original image to obtain a smoothed image f1(i, j), i.e.

f1(i, j) = Σ(m = −p..p) Σ(n = −p..p) ωij(m, n) f0(i + m, j + n)

where p defines a (2p + 1) × (2p + 1) window, the filter region being centered at point (i, j), and ωij(m, n) is the weighting coefficient designed for the pixel f0(i + m, j + n) in the window.
2) The weight is determined from the differences between the pixels in the filter window. The weighting function is monotonically decreasing, and a monotonically decreasing exponential function Φ(x) is selected to construct it; the weighting function ωij(m, n) and the exponential function Φ(x) are related by

ωij(m, n) = Φ(|f(i, j) − f(i + m, j + n)|)
3) The weighting coefficients are calculated with a Gaussian function, i.e.

ωij(m, n) = exp(−|f′(i + m, j + n)|² / (2σ²))

where σ adjusts the exponential decay rate and determines the edge amplitude that can be preserved during smoothing, and f′(i, j) represents the gray gradient, with

|f′(i, j)|² = Gi² + Gj²
4) Let the filter window size be 3 × 3, i.e. p = 1; the weight coefficients then follow from the formulas above evaluated over the 3 × 3 neighborhood.
5) Normalize the weights ωij to ensure that the smoothed f1(i, j) does not exceed the gray range of the image.
The beneficial effects of the invention are: preprocessing the light bar image removes image noise; extracting the RGB components of the image lets the light bar be extracted effectively and reduces the influence of illumination and the background environment on light bar extraction; cropping the region of interest reduces the amount of computation; the distance transform algorithm coarsely extracts the light bar image; the Pavlidis thinning algorithm further thins it; and adaptive smoothing filtering of the bulges and burrs yields a light bar center line of single-pixel width. The method extracts the center line of the line structured light bar quickly and accurately under complex background and illumination conditions and meets the precision and real-time requirements of a visual detection system.
Drawings
FIG. 1 is a flow chart of a line structured light bar centerline extraction method as disclosed herein;
FIG. 2 is an original light bar image under natural light;
FIG. 3 is a gray scale distribution of a distance transformed light bar image;
FIG. 4 is a schematic diagram of type definition of interior points, exterior points, and outliers;
FIG. 5 is a light bar image after the Pavlidis thinning algorithm is thinned;
FIG. 6 is a schematic view of the eight neighborhood points P1-P8 of P0;
fig. 7 is a light bar image smoothed by the adaptive smoothing algorithm.
Detailed Description
For the purpose of enhancing the understanding of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and examples, which are provided for the purpose of illustration only and are not intended to limit the scope of the present invention.
Fig. 1 is a schematic flow chart of a method for extracting a line structured light centerline in a complex environment according to an embodiment of the present invention.
As shown in fig. 1, a method for extracting a centerline of a line structured light bar in a complex environment according to an embodiment of the present invention includes the following steps:
the specific process comprises the following steps:
(1) The light bar image captured by the camera is shown in fig. 2. It contains various noise signals, and this noise can strongly affect the extraction of the light bar center, causing problems such as edge blurring and unsmooth, uneven brightness; median filtering both removes the noise and protects the edges of the image.
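As an illustrative sketch (not code from the patent), a 3 × 3 median filter of the kind described above can be written in a few lines of numpy; the function name and the edge-padding choice are assumptions for the example:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: replace each pixel with the median of its
    neighborhood. Good at removing salt-and-pepper noise without blurring
    edges the way a mean filter does. Borders are handled by edge-padding."""
    pad = np.pad(img, 1, mode='edge')
    # nine shifted views of the padded image, one per window position
    stack = [pad[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0).astype(img.dtype)
```

A single bright outlier pixel in an otherwise flat region is replaced by the neighborhood median, which is exactly the salt-and-pepper behavior the description relies on.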
(2) The parameters corresponding to the RGB components are set according to the actual laser projection, so that the light bar part of the color image is extracted well and the influence of the background on light bar processing is reduced; for a red laser image, the R component is extracted to obtain a gray image.
(3) The image is binarized with the Kittler minimum-error algorithm. The principle is: compute the gradient-weighted average gray value of the whole image and use it as the global threshold T, where T is calculated as:

T = Σx Σy e(x, y) f(x, y) / Σx Σy e(x, y)

in the formula, f(x, y) represents the original gray image and e(x, y) = max{|ex|, |ey|} denotes the maximum gradient of the image at the pixel, where ex = f(x − 1, y) − f(x + 1, y) is the gradient in the horizontal direction and ey = f(x, y − 1) − f(x, y + 1) is the gradient in the vertical direction.
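The gradient-weighted global threshold just described can be sketched in numpy as follows. This is a hedged illustration of the stated formula, not the patent's implementation; the function names and the assignment of image axes to the x and y directions are assumptions of the example:

```python
import numpy as np

def gradient_weighted_threshold(f):
    """Global threshold T: the gray level f averaged with the per-pixel
    gradient magnitude e(x, y) = max(|ex|, |ey|) as the weight."""
    f = f.astype(np.float64)
    ex = np.zeros_like(f)
    ey = np.zeros_like(f)
    # central differences f(x-1,y) - f(x+1,y) and f(x,y-1) - f(x,y+1);
    # which array axis is "x" is an assumption here
    ex[1:-1, :] = f[:-2, :] - f[2:, :]
    ey[:, 1:-1] = f[:, :-2] - f[:, 2:]
    e = np.maximum(np.abs(ex), np.abs(ey))
    return (e * f).sum() / max(e.sum(), 1e-12)

def binarize(f, T):
    # white (255) foreground, black (0) background, as in the patent
    return np.where(f > T, 255, 0).astype(np.uint8)
```

For a dark frame with one bright stripe, the gradient weight concentrates on the stripe edges, so T lands between the background and stripe gray levels.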
Step 2, cutting the region of interest of the preprocessed light bar image;
the image processing speed is greatly influenced by directly processing the acquired image, in order to make the measurement more targeted and reduce the processing time, a specific area of the image needs to be processed, and the size of the specific area is optimal to completely contain the light strip image of the object, so that the information on the optical system stripe can be kept as much as possible while the data calculation amount is reduced.
Step 3, performing distance transformation on the cut image to obtain a light bar rough extraction image;
the principle of distance transformation of an image is: the binary image is converted into a gray image by a process of identifying the distance between a space point (a target point and a background point), in which the distance between each pixel and the nearest background is represented by the gray level of the pixel, and the gray distribution of the light bar image after the distance conversion is as shown in fig. 3.
The specific process comprises the following steps:
(1) The pixel points in the image are divided into interior points, exterior points, and isolated points, as shown in fig. 4. A pixel is an interior point if its four upper, lower, left, and right neighborhoods are all 1, and an isolated point if all four are 0; the remaining points are defined as boundary points.
(2) Let L be a connected region in the binary light bar image, comprising the target set P and the background set B, and let the distance map be D. The distance transform formula is:

D(p) = min{ dis(p, q) : p ∈ P, q ∈ B }

in the formula, min dis(·) represents the minimum distance from the target point p to the background set B.
(3) The Euclidean distance function dis() is:

dis(p, q) = √((xp − xq)² + (yp − yq)²)
(4) Traverse all interior points and non-interior points in the image; the point sets are P1 and P2. For each interior point (x, y) in P1, find its minimum distance to P2 with the distance formula dis(); these minimum distances form the set P3. Obtain the maximum value Max and the minimum value Min in P3 and assign each interior point the gray value Gray according to:

Gray(x, y) = 255 × |P3(x, y) − Min| / |Max − Min|

in the formula, P3(x, y) represents the shortest distance from point (x, y) of P1 to P2.
Step 4, thinning the light bar gray image by adopting a Pavlidis thinning algorithm to obtain an initial value of a light bar central line;
the principle of refining the light bar image by adopting the Pavlidis algorithm is as follows: the light bar image is peeled layer by layer, some points are removed from the original image, but the original shape is still kept until a single-pixel-wide framework is obtained, namely the initial value of the center of the light bar, as shown in fig. 5.
The specific process comprises the following steps:
(1) For the light bar image to be processed, the gray value of a black background pixel is 0, and the gray value of a foreground pixel to be thinned is its gray value after the distance transform; fig. 6 is a schematic diagram of the eight neighborhood points P1-P8 of point P0.
(2) Find the contour of the foreground pixels and mark it with 2. For a pixel with value 1, if the four neighborhood points P1, P3, P5 and P7 are all 1, the point is an interior point, and the loop continues to judge the other pixels.
(3) If a pixel is an isolated point or an end point, mark its gray value as 3.
(4) If the pixel is another point whose deletion could change 8-connectivity, mark the current pixel's gray value as 3. For example: if P4 and P8 are 0 but P5, P6, P7 and P1, P2, P3 have non-zero values, deleting the point would change connectivity.
(5) Process the pixels not equal to 0: if all the pixels surrounding one are 2, mark it as 4 (not deleted). For the other non-deletable cases, set the gray value of the current pixel to 4.
(6) The contour points with a pixel gray value of 2 are again determined, and the deletable points are marked with 5.
(7) The deletion operation is performed for the points whose pixel gradation values are 2 and 5, and the point whose final value is 4 is the thinned contour point.
And 5, smoothing the bulges and burrs of the initial light strip center line by adopting a self-adaptive smoothing algorithm to obtain an accurate light strip center line extraction image.
The principle of the self-adaptive smoothing of the light bar image is: according to the abrupt change of the gray values of the pixels at the light bar centers, the filter weights are changed adaptively, sharpening the image edges while smoothing the flat regions, to obtain the precise light bar centers, as shown in fig. 7.
The specific process comprises the following steps:
(1) For the input image f0(i, j), adaptive smoothing filtering convolves a filter function with the original image to obtain a smoothed image f1(i, j), i.e.

f1(i, j) = Σ(m = −p..p) Σ(n = −p..p) ωij(m, n) f0(i + m, j + n)

where p defines a (2p + 1) × (2p + 1) window, the filter region being centered at point (i, j), and ωij(m, n) is the weighting coefficient designed for the pixel f0(i + m, j + n) in the window.
(2) The weight is determined from the differences between the pixels in the filter window. The weighting function is monotonically decreasing, and a monotonically decreasing exponential function Φ(x) is selected to construct it; the weighting function ωij(m, n) and the exponential function Φ(x) are related by

ωij(m, n) = Φ(|f(i, j) − f(i + m, j + n)|)
(3) The weighting coefficients are calculated with a Gaussian function, i.e.

ωij(m, n) = exp(−|f′(i + m, j + n)|² / (2σ²))

where σ adjusts the exponential decay rate and determines the edge amplitude that can be preserved during smoothing, and f′(i, j) represents the gray gradient, with

|f′(i, j)|² = Gi² + Gj²
(4) Let the filter window size be 3 × 3, i.e. p = 1; the weight coefficients then follow from the formulas above evaluated over the 3 × 3 neighborhood.
(5) Normalize the weights ωij to ensure that the smoothed f1(i, j) does not exceed the gray range of the image.
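One pass of the adaptive smoothing in steps (1)-(5) can be sketched in numpy as follows. The gradient-based Gaussian weights follow the description above; the default σ and the use of np.roll (which wraps at the image border) are simplifications of my own:

```python
import numpy as np

def adaptive_smooth(f, sigma=10.0):
    """One pass of adaptive smoothing with a 3x3 window (p = 1): weights fall
    off exponentially with the squared gray gradient, so edges (large |f'|)
    get small weights and are preserved, while flat areas are averaged.
    Dividing by the weight sum normalizes, keeping the output in the gray
    range of the input."""
    f = f.astype(np.float64)
    gi, gj = np.gradient(f)                      # Gi, Gj
    w = np.exp(-(gi ** 2 + gj ** 2) / (2.0 * sigma ** 2))
    num = np.zeros_like(f)
    den = np.zeros_like(f)
    for m in (-1, 0, 1):
        for n in (-1, 0, 1):                     # 3x3 neighborhood
            ws = np.roll(np.roll(w, m, axis=0), n, axis=1)
            fs = np.roll(np.roll(f, m, axis=0), n, axis=1)
            num += ws * fs
            den += ws
    return num / den
```

Because the weights are positive and normalized, each output pixel is a convex combination of its neighborhood, so a flat image is left unchanged and no output value can leave the input's gray range.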
The invention mainly solves the problems that the extraction of the structured light bar center is easily influenced by background and illumination, and that extraction precision and extraction speed cannot both be achieved at once. RGB component extraction effectively extracts the light bar image; preprocessing and region-of-interest cropping remove noise and reduce the amount of computation; the distance transform algorithm coarsely extracts the light bar image; the Pavlidis thinning algorithm further thins it; and adaptive smoothing filtering of the bulges and burrs yields a light bar center line of single-pixel width. The method extracts the center line of the line structured light bar quickly and accurately under complex background and illumination conditions and meets the requirements of a visual detection system.
Claims (10)
1. A line structured light strip center line extraction method under a complex environment is characterized by comprising the following steps:
s1, preprocessing the collected light bar image, including image denoising, RGB component extraction and image binarization;
s2, cutting the region of interest of the preprocessed light bar image;
s3, performing distance transformation on the cut image to obtain a light bar rough extraction image;
s4, refining the light bar image by adopting a Pavlidis refining algorithm to obtain an initial value of a light bar central line;
and S5, smoothing the bulges and burrs of the initial light strip center line by adopting a self-adaptive smoothing algorithm to obtain an accurate light strip center line extraction image.
2. The method as claimed in claim 1, wherein the step S1 of preprocessing the light bar image comprises: denoising the acquired image with median filtering, extracting the RGB components, and binarizing into a black-and-white image, where white is the foreground and black is the background.
3. The method for extracting the centerline of line structured light bar in complex environment according to claim 2, wherein said step S1 is performed by using Kittler algorithm to binarize the image.
4. The method of claim 1, wherein the region-of-interest cropping of the light bar image in the step S2 comprises: cropping a rectangular area containing the foreground of the binary image.
5. The method as claimed in claim 1, wherein the distance transform of the light bar image in step S3 comprises: converting the binary image into a gray image through a process that measures the distance from each target point to the background, wherein the gray level of each pixel in the gray image represents its distance to the nearest background.
6. The method for extracting the centerline of a line structured light bar in a complex environment according to claim 1, wherein the Pavlidis thinning of the light bar image in step S4 comprises: peeling the light bar image layer by layer, removing some points from the original image while still keeping the original shape, until a skeleton of single-pixel width, namely the initial value of the light bar center, is obtained.
7. The method as claimed in claim 1, wherein the step S5 of adaptively smoothing the light stripe image comprises: according to the abrupt change characteristic of the gray value of the pixels in the centers of the light bars, the weight of the filter is changed in a self-adaptive mode, the edge of the image is sharpened in the process of smoothing the area, and the precise centers of the light bars are obtained.
8. The method for extracting the centerline of line structured light bar under complex environment according to claim 1, wherein the step S3 of distance transforming the cropped image specifically comprises the following steps:
s3.1, dividing the pixel points in the image into interior points, exterior points, and isolated points, wherein a pixel is an interior point if its four upper, lower, left, and right neighborhoods are all 1, and an isolated point if all four are 0; the remaining points are defined as boundary points;
s3.2, letting L be a connected region in the binary light bar image, comprising the target set P and the background set B, with distance map D, the distance transform formula being:

D(p) = min{ dis(p, q) : p ∈ P, q ∈ B }

in the formula, min dis(·) represents the minimum distance from the target point p to the background set B, and the Euclidean distance function dis() is:

dis(p, q) = √((xp − xq)² + (yp − yq)²)
s3.3, traversing all interior points and non-interior points in the image, the point sets being P1 and P2; for each interior point (x, y) in P1, calculating its minimum distance to P2 with the distance formula dis(), the minimum distances forming the set P3; obtaining the maximum value Max and the minimum value Min in P3 and assigning each interior point the gray value Gray according to:

Gray(x, y) = 255 × |P3(x, y) − Min| / |Max − Min|

in the formula, P3(x, y) represents the shortest distance from point (x, y) of P1 to P2;
and S3.4, taking no operation for the isolated points; after the distance transform the binary image becomes a 32-bit floating-point gray image, which needs a normalization operation for convenient display.
9. The line structured light bar centerline extraction method under the complex environment as claimed in claim 1, wherein the step S4 of refining the light bar grayscale image by the Pavlidis refinement algorithm specifically includes the following steps:
s4.1, for the light bar image to be processed, the gray value of a black background pixel being 0, and the gray value of a foreground pixel to be thinned being its gray value after the distance transform;
s4.2, finding the contour of the foreground pixels and marking it with 2; for a pixel point with gray value 1, if the four neighborhood points P1, P3, P5 and P7 are all 1, the point is an interior point, and the loop continues to judge the other pixels;
s4.3, if a pixel is an isolated point or an end point, marking its gray value as 3;
s4.4, processing the pixels not equal to 0, marking a pixel as 4 if the pixels surrounding it are all 2; for the other non-deletable cases, setting the gray value of the current pixel to 4;
s4.5, judging the contour points with the pixel gray value of 2 again, and marking the deletable points as 5;
and S4.6, performing deletion operation on the points with the pixel gray values of 2 and 5, wherein the point with the final pixel gray value of 4 is the thinned contour point.
10. The method as claimed in claim 1, wherein the step S5 of smoothing the initial light stripe centerline by using an adaptive smoothing algorithm includes the following steps:
s5.1, for the input image f0(i, j), the adaptive smoothing filtering convolving a filter function with the original image to obtain the smoothed image f1(i, j), i.e.

f1(i, j) = Σ(m = −p..p) Σ(n = −p..p) ωij(m, n) f0(i + m, j + n)

wherein p defines a (2p + 1) × (2p + 1) window, the filter region being centered at point (i, j), and ωij(m, n) is the weighting coefficient designed for the pixel f0(i + m, j + n) in the window;
s5.2, determining the weight from the differences between the pixels in the filter window, the weighting function being monotonically decreasing; a monotonically decreasing exponential function Φ(x) is selected to construct the weighting function, with

ωij(m, n) = Φ(|f(i, j) − f(i + m, j + n)|)
S5.3, calculate the weighting coefficients with a Gaussian function, i.e.
ωij(m, n) = exp(−|f′(i+m, j+n)|² / (2σ²))
where σ adjusts the exponential decay rate and determines the edge amplitude that can be preserved during smoothing, and f′(i, j) denotes the gray gradient, with
|f′(i, j)|² = Gi² + Gj²;
S5.4, let the size of the filter window be 3 × 3; with Gi and Gj the gray-level differences in the i and j directions within the window, the weight coefficient can then be expressed as
ωij(m, n) = exp(−(Gi² + Gj²) / (2σ²));
S5.5, normalize the weights ωij so that the smoothed f1(i, j) does not exceed the gray range of the image.
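The smoothing pass of steps S5.1–S5.5 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name `adaptive_smooth`, the default `sigma`, and the use of central differences for Gi and Gj are assumptions (the claim does not fix the gradient operator).

```python
import numpy as np

def adaptive_smooth(f0, sigma=10.0, iterations=1):
    """Adaptive smoothing sketch, assuming a 3x3 window: each pixel
    becomes a weighted mean of its neighborhood with weights
    w = exp(-|f'|^2 / (2*sigma^2)), so large-gradient (edge) pixels
    contribute little and edges are preserved."""
    f = np.asarray(f0, dtype=np.float64)
    h, w = f.shape
    for _ in range(iterations):
        # gray gradients Gi, Gj by central differences (zero at borders)
        gi = np.zeros_like(f)
        gj = np.zeros_like(f)
        gi[1:-1, :] = (f[2:, :] - f[:-2, :]) / 2.0
        gj[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / 2.0
        wgt = np.exp(-(gi ** 2 + gj ** 2) / (2.0 * sigma ** 2))
        # weighted 3x3 average; dividing by the weight sum is the
        # normalization of step S5.5, which keeps f1 inside the
        # gray range of the image
        fp = np.pad(f, 1, mode="edge")
        wp = np.pad(wgt, 1, mode="edge")
        num = np.zeros_like(f)
        den = np.zeros_like(f)
        for m in (-1, 0, 1):
            for n in (-1, 0, 1):
                win_w = wp[1 + m : 1 + m + h, 1 + n : 1 + n + w]
                win_f = fp[1 + m : 1 + m + h, 1 + n : 1 + n + w]
                num += win_w * win_f
                den += win_w
        f = num / den
    return f
```

A flat image passes through unchanged, and a hard intensity step is preserved rather than blurred away, since the weights collapse to near zero across the step.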
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011334449.3A CN112489052A (en) | 2020-11-24 | 2020-11-24 | Line structure light central line extraction method under complex environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112489052A true CN112489052A (en) | 2021-03-12 |
Family
ID=74933983
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011334449.3A Pending CN112489052A (en) | 2020-11-24 | 2020-11-24 | Line structure light central line extraction method under complex environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112489052A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115953459A (en) * | 2023-03-10 | 2023-04-11 | 齐鲁工业大学(山东省科学院) | Method for extracting laser stripe center line under complex illumination condition |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056118A (en) * | 2016-06-12 | 2016-10-26 | 合肥工业大学 | Recognition and counting method for cells |
CN106780637A (en) * | 2016-12-07 | 2017-05-31 | 中国石油大学(华东) | A kind of fast parallel image thinning algorithm based on pulse nerve membranous system |
CN107507215A (en) * | 2017-08-07 | 2017-12-22 | 广东电网有限责任公司珠海供电局 | A kind of power equipment infrared chart dividing method based on adaptive quantizing enhancing |
CN107590347A (en) * | 2017-09-22 | 2018-01-16 | 武汉德友科技有限公司 | One kind is based on the identification of matching isolated point and delet method and the system of designing a model |
CN110866924A (en) * | 2019-09-24 | 2020-03-06 | 重庆邮电大学 | Line structured light center line extraction method and storage medium |
CN111145161A (en) * | 2019-12-28 | 2020-05-12 | 北京工业大学 | Method for processing and identifying pavement crack digital image |
CN111383293A (en) * | 2020-02-26 | 2020-07-07 | 北京京东叁佰陆拾度电子商务有限公司 | Image element vectorization method and device |
Non-Patent Citations (7)
Title |
---|
GUAN XU et al.: "Timed evaluation of the center extraction of a moving laser stripe on a vehicle body using the Sigmoid-Gaussian function and a tracking method", OPTIK, vol. 130, pages 1454 - 1461 *
任保刚, 贾海波, 芮杰: "An iterative algorithm for image smoothing and edge detection", Journal of the Institute of Surveying and Mapping, no. 03, 30 September 2005 (2005-09-30), pages 178 - 180 *
刘卫光, 李娟: "A comparative study of several thinning algorithms", Technology Wind, pages 1 *
景晓军, 李剑峰, 熊玉庆: "An adaptive smoothing filtering algorithm for still images", Journal on Communications, no. 10, 25 October 2002 (2002-10-25), pages 7 - 14 *
陈娟; 陈乾辉; 师路欢; 吴建军: "Edge detection techniques in image tracking", Chinese Journal of Optics and Applied Optics, no. 01, 15 February 2009 (2009-02-15), pages 46 - 53 *
顾益兰, 李锋: "Research on sub-pixel extraction of light stripe centers based on line structured light", Electronic Design Engineering, vol. 25, no. 21, pages 148 - 151 *
鲍茜, 李锋: "A light stripe center extraction method combining the Hessian matrix with gradient variance", Computer & Digital Engineering, vol. 48, no. 8, pages 2018 - 2023 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111415363B (en) | Image edge identification method | |
CN110866924B (en) | Line structured light center line extraction method and storage medium | |
CN109242791B (en) | Batch repair method for damaged plant leaves | |
CN107680054B (en) | Multi-source image fusion method in haze environment | |
CN110286124B (en) | Machine vision-based refractory brick measuring system | |
CN107862667B (en) | Urban shadow detection and removal method based on high-resolution remote sensing image | |
CN110232389B (en) | Stereoscopic vision navigation method based on invariance of green crop feature extraction | |
CN114219805B (en) | Intelligent detection method for glass defects | |
CN116597392B (en) | Hydraulic oil impurity identification method based on machine vision | |
CN109978848B (en) | Method for detecting hard exudation in fundus image based on multi-light-source color constancy model | |
CN114331986A (en) | Dam crack identification and measurement method based on unmanned aerial vehicle vision | |
CN112308872B (en) | Image edge detection method based on multi-scale Gabor first derivative | |
CN112991283A (en) | Flexible IC substrate line width detection method based on super-pixels, medium and equipment | |
CN111354047A (en) | Camera module positioning method and system based on computer vision | |
CN109544513A (en) | A kind of steel pipe end surface defect extraction knowledge method for distinguishing | |
CN104915951B (en) | A kind of stippled formula DPM two-dimension code area localization methods | |
CN112489052A (en) | Line structure light central line extraction method under complex environment | |
Chen et al. | Image segmentation based on mathematical morphological operator | |
CN112102189B (en) | Line structure light bar center line extraction method | |
CN117746165A (en) | Method and device for identifying tire types of wheel type excavator | |
CN113284158B (en) | Image edge extraction method and system based on structural constraint clustering | |
CN111882529B (en) | Mura defect detection method and device for display screen | |
CN115187790A (en) | Image contour extraction method based on reference region binarization result | |
Khan et al. | Segmentation of single and overlapping leaves by extracting appropriate contours | |
CN113505811A (en) | Machine vision imaging method for hub production |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||