CN116433700A - Visual positioning method for flange part contour - Google Patents
- Publication number
- CN116433700A CN116433700A CN202310693060.5A CN202310693060A CN116433700A CN 116433700 A CN116433700 A CN 116433700A CN 202310693060 A CN202310693060 A CN 202310693060A CN 116433700 A CN116433700 A CN 116433700A
- Authority
- CN
- China
- Prior art keywords
- flange part
- convolution kernel
- determining
- value
- contour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/143—Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Probability & Statistics with Applications (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a visual positioning method for a flange part contour, relating to the field of image processing. The method comprises the following steps: acquiring an image of the flange part and converting the image into a grayscale image; determining a convolution kernel size based on the grayscale image; determining a weight modification value based on the distribution characteristics of the pixel points in the convolution kernel; and optimizing the initial Sobel operator by using the weight modification value, and processing the image of the flange part by using the optimized Sobel operator to acquire the contour information corresponding to the flange part. The scheme can obtain complete contour information.
Description
Technical Field
The invention relates to the field of image processing, in particular to a flange part contour visual positioning method.
Background
Visual contour positioning of flange parts is an important task in the manufacture, assembly, and inspection of mechanical parts. Flanges can be classified according to their outline, specifically into right-angle flanges, circular flanges, tapered flanges, annular flanges, and so on. The flange contour is generally first extracted through edge detection; post-processing then yields a binary image containing only contour information, and basic contour information, such as contour position and shape, is obtained by analyzing the binary image. For edge detection, the Sobel operator is commonly selected: because the flange image has a simple color structure, the Sobel operator, with its simpler and clearer calculation, is well suited.
When the flange profile is extracted with the Sobel operator, the limited viewing angle of the acquisition camera leaves the edge portion of the detected image incomplete, so the flange part contour information obtained in subsequent post-processing is also incomplete.
Disclosure of Invention
The invention provides a flange part contour visual positioning method which can obtain complete contour information.
The application provides a flange part contour visual positioning method, which comprises the following steps:
acquiring an image of the flange part, and converting the image into a gray image;
determining a convolution kernel size based on the grayscale image;
determining a weight modification value based on the distribution characteristics of the pixel points in the convolution kernel;
and optimizing the initial sobel operator by using the weight modification value, and processing the flange part by using the optimized sobel operator so as to acquire the contour information corresponding to the flange part.
In an alternative embodiment, determining a convolution kernel size based on the grayscale image includes:
detecting gray values of pixel points in a window in a sliding process by using two windows sliding in parallel in the gray image, and further determining abrupt pixel points, wherein the abrupt pixel points are pixel points corresponding to preset values when the gray values jump from 0 to the preset values;
connecting the abrupt pixel points, and determining a normal line from a perpendicular line of the connection line;
determining a localized area of the flange part based on the normal;
the convolution kernel size is determined based on the local region.
In an alternative embodiment, determining the convolution kernel size based on the local region includes:
calculating entropy of the local area based on probability of occurrence of different gray values in the local area;
determining a first pixel point when the entropy of the local area changes from 0 to not 0, and determining a second pixel point when the entropy of the local area changes from not 0 to 0;
and determining the distance between the first pixel point and the second pixel point, wherein the distance is the convolution kernel size.
In an alternative embodiment, calculating entropy of the local region based on probabilities of occurrence of different gray values in the local region includes:
the entropy of the local region is calculated using the following equation (1):

H(x) = -∑_{i=1}^{m} p_i log p_i    (1)

where H(x) represents the entropy of a local area x; p_i represents the probability of occurrence of the i-th gray value in the local area; and m represents the number of different gray values.
In an alternative embodiment, determining the weight modification value based on the distribution characteristics of the pixel points within the convolution kernel includes:
judging the necessity of marking the central pixel point in the convolution kernel based on the entropy in the convolution kernel, the maximum value and the minimum value of the gray values of the pixel points in the convolution kernel;
if the marking necessity is in the first preset range, not marking, and if the marking necessity is in the second preset range, marking;
the marked pixel points are connected in pairs, and the shape of the flange part is determined;
the weight modification value is determined based on the shape.
In an alternative embodiment, the method for connecting the marked pixel points in pairs to determine the shape of the flange part comprises the following steps:
connecting the marked pixel points in pairs to obtain a connecting line;
determining a normal line of a connecting line, the normal line passing through a midpoint of the connecting line;
if all normals intersect at one point, the flange part has a circular profile; if all normals are parallel to each other, the flange part has a rectangular profile.
In an alternative embodiment, determining the weight modification value based on the shape includes:
if the flange part is of a circular outline, the weight in the direction formed by the connecting lines is increased and the weights in the other directions are reduced, thereby determining the weight modification value.
In an alternative embodiment, the processing the flange part by using the optimized sobel operator to obtain the profile information corresponding to the flange part includes:
processing the flange part by using the optimized sobel operator to obtain a gradient output value;
and determining contour information corresponding to the flange part based on the gradient output value.
In an alternative embodiment, the processing the flange part by using the optimized sobel operator to obtain a gradient output value includes:
obtaining a gradient output value by using the following formula (2):

G = w_1|G_1| + w_2|G_2| + w_3|G_3| + w_4|G_4|    (2)

where G represents the gradient output value of the Sobel operator; G_1, G_2, G_3, G_4 represent the components of the gradient in the 4 directions; and w_1, w_2, w_3, w_4 represent the weight modification values of the 4 directions. In the initial Sobel operator the 4 weights are equal, and in the optimized Sobel operator the weight of the marked connecting-line direction is increased according to n, where n represents the number of marked pixel points in the connecting-line direction.
In an alternative embodiment, determining the necessity of marking the center pixel in the convolution kernel based on the entropy in the convolution kernel and the maximum and minimum gray values of the pixels in the convolution kernel includes:
the necessity of marking the center pixel point in the convolution kernel is calculated using equation (3), in which F represents the necessity of marking the central pixel point in the convolution kernel; i indexes the pixels of different gray values within the convolution kernel; p_i represents the probability of occurrence of pixels with the i-th gray value in the convolution kernel; and I_max and I_min represent the maximum and minimum gray values of the pixel points in the convolution kernel, respectively.
The beneficial effects of the invention are as follows. Compared with the prior art, the flange part contour visual positioning method provided by the invention comprises: acquiring an image of the flange part and converting the image into a grayscale image; determining a convolution kernel size based on the grayscale image; determining a weight modification value based on the distribution characteristics of the pixel points in the convolution kernel; and optimizing the initial Sobel operator by using the weight modification value, and processing the image of the flange part by using the optimized Sobel operator to acquire the contour information corresponding to the flange part. The scheme can obtain complete contour information.
Drawings
FIG. 1 is a flow chart of a first embodiment of the flange part contour visual positioning method of the present invention;
FIG. 2 is a flow chart of an embodiment of step S12 in FIG. 1;
FIG. 3 is a flowchart illustrating an embodiment of step S13 in FIG. 1;
FIG. 4 is a schematic diagram of a connection line of marked pixels.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The present invention will be described in detail with reference to the accompanying drawings and examples.
Referring to fig. 1, a flow chart of an embodiment of a method for visually positioning a flange part contour according to the present application specifically includes:
step S11: and acquiring an image of the flange part, and converting the image into a gray image.
Specifically, an image of the flange part is collected. To increase the contrast between pixel points and facilitate the subsequent improvement of the operator, the collected image is grayed and then equalized to obtain a grayscale image, and all subsequent processing and analysis are performed on this grayscale image.
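This preprocessing step can be sketched as follows; the luminance weights and the 256-bin histogram equalization are standard choices, not values specified by the patent:

```python
import numpy as np

def to_equalized_gray(rgb):
    """Convert an (H, W, 3) uint8 RGB image to gray and equalize its histogram."""
    # Standard luminance weighting (an assumption; the patent only says "graying").
    gray = np.round(0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
                    + 0.114 * rgb[..., 2]).astype(np.uint8)
    # Histogram equalization via the cumulative distribution of gray levels.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```

After equalization the foreground/background contrast is stretched, which makes the gray-value jump in the next step easier to detect.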
Step S12: a convolution kernel size is determined based on the grayscale image.
A preliminary analysis is performed on the preprocessed image to obtain prior features at the flange edge. The purpose of obtaining the prior features is to obtain the width of the flange edge, so that the calculated convolution kernel can cover the edge; the convolution kernel of the general Sobel operator is 3×3, and to prevent the local edge width from exceeding what a 3×3 kernel can cover, the convolution kernel size must be determined here so that the kernel wraps the edge. Wrapping the edge ensures that, during the operator operation, the pixel points in the convolution kernel completely reflect the edge part, so that it can be judged, according to the prior features, whether the distribution of the gray values of the pixel points in the convolution kernel satisfies the rule of pixel distribution at the flange edge. The prior features are that the gray-value difference between the flange edge and the background is large, and that the edge has a width within which the gray level changes gradually. A primary edge width is obtained by quantifying the gray difference and the gradual gray change; the edge obtained here is a blurry edge, and the main purpose is to determine its width, so that the convolution kernel is wide enough to completely cover the edge and facilitate the analysis in the subsequent steps.
Specifically, referring to fig. 2, step S12 specifically includes:
step S21: and detecting gray values of pixel points in the window in a sliding process by using two windows sliding in parallel in the gray image, and further determining abrupt pixel points, wherein the abrupt pixel points are pixel points corresponding to preset values when the gray values jump from 0 to the preset values.
Specifically, in the grayscale image the background gray value is 0. Two sliding windows sliding in parallel are taken, so that the gray values of the pixel points in the windows are detected during sliding; if the gray value in a window jumps from 0 to a preset value, the pixel point corresponding to that preset value is determined as an abrupt pixel point. That is, the first pixel point at which the gray value jumps from 0 to a preset value is determined as the abrupt pixel point.
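A minimal sketch of this jump test, with a per-row scan standing in for the two parallel sliding windows; the `preset` threshold value is illustrative, not given in the text:

```python
import numpy as np

def find_abrupt_pixels(gray, preset=50):
    """Return (row, col) of the first pixel per row whose gray value
    jumps from 0 to at least `preset` (the abrupt pixel point)."""
    points = []
    for r, row in enumerate(gray):
        prev = 0
        for c, v in enumerate(row):
            if prev == 0 and v >= preset:
                points.append((r, c))
                break  # only the first jump in this row matters
            prev = v
    return points
```

The returned points are the abrupt pixel points that are subsequently connected to construct the normal direction.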
Step S22: and connecting the abrupt pixel points, and determining a normal line by a perpendicular line of the connecting line.
The abrupt pixel points are connected, and the perpendicular of the connecting line is the normal of the area with the larger gray-value difference.
Step S23: a localized region of the flange part is determined based on the normal.
After the normal direction is obtained, the area consistent with the normal direction is defined as the local area formed by the normal, i.e. the local area of the flange part. Specifically, the normal passes through the middle pixel point of the local area.
Step S24: the convolution kernel size is determined based on the local region.
Specifically, the entropy of the local region is calculated based on the probability of occurrence of different gray values in the local region. In one embodiment, the entropy of the local region is calculated using the following equation (1):

H(x) = -∑_{i=1}^{m} p_i log p_i    (1)

where H(x) represents the entropy of a local area x; p_i represents the probability of occurrence of the i-th gray value in the local area; and m represents the number of different gray values.
When the window is in the background portion, H(x) = 0; when the window is in the foreground portion but not in the edge portion, H(x) = 0; when the window is in the edge region, H(x) > 0. Based on this, the first pixel point is determined where the entropy H(x) of the local area changes from 0 to non-zero, and the second pixel point is determined where H(x) changes from non-zero back to 0. The distance n between the first pixel point and the second pixel point is the convolution kernel size, i.e. the convolution kernel is n × n. In one embodiment, n has a value of 5.
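The zero / non-zero entropy transitions above can be sketched on a 1-D gray profile taken along the normal; the base-2 logarithm and the 3-pixel sliding window are illustrative choices, since the text only fixes the transitions themselves:

```python
import numpy as np

def window_entropy(window):
    """Entropy of the gray-value distribution in a window, per equation (1)."""
    _, counts = np.unique(np.asarray(window), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def kernel_size_from_profile(profile, window=3):
    """Slide a window along the profile; the span between the first and
    last position with non-zero entropy covers the edge and gives the
    convolution kernel size n (the kernel is n x n)."""
    ent = [window_entropy(profile[i:i + window])
           for i in range(len(profile) - window + 1)]
    nonzero = [i for i, e in enumerate(ent) if e > 1e-12]
    if not nonzero:
        return 0  # no edge transition found in this profile
    return nonzero[-1] - nonzero[0] + window
```

A uniform background or foreground segment yields zero entropy, so only the blurred edge band contributes to the measured width.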
Step S13: a weight modification value is determined based on the distribution characteristics of the pixels within the convolution kernel.
Step S12 obtains the size of the convolution kernel, which can already cover the width of the basic edge; therefore, in step S13, the distribution characteristics of the pixels in the convolution kernel can be analyzed directly to determine whether they satisfy the prior rule of edge pixel distribution.
In one embodiment, referring to fig. 3, step S13 specifically includes:
step S31: and judging the necessity of marking the central pixel point in the convolution kernel based on the entropy in the convolution kernel, the maximum value and the minimum value of the gray values of the pixel points in the convolution kernel.
The purpose of judging the marking necessity of the central pixel point in the convolution kernel is to assign high necessity to points whose in-kernel distribution characteristics satisfy the prior features, so that those points are considered to satisfy the edge rule. This series of points is then connected, the basic distribution form of the points in the spatial domain is quantized, and it is further confirmed that the points satisfy the edge rule.
For the points satisfying the rule, the viewing-angle problem can cause the edge to be missing over a certain section, so gradient weights must be divided and the gradient proportion at the edge amplified; in this way the missing-edge problem is resolved in the final output result and a good edge detection effect is achieved.
In one embodiment, the necessity of marking the center pixel point in the convolution kernel is calculated using equation (3), in which F represents the necessity of marking the central pixel point in the convolution kernel; i indexes the pixels of different gray values within the convolution kernel; p_i represents the probability of occurrence of pixels with the i-th gray value in the convolution kernel; and I_max and I_min represent the maximum and minimum gray values of the pixel points in the convolution kernel, respectively.
Step S32: if the marking necessity is within a first preset range, the marking is not performed, and if the marking necessity is within a second preset range, the marking is performed.
It will be appreciated that the gray-value distribution of pixel points at an edge is more discrete than in the foreground or the background, so the greater the entropy within the convolution kernel, the greater the marking necessity; the two are proportional. Likewise, the larger the difference between the maximum and minimum gray values, the greater the marking necessity, since the gray value at the edge is generally lower than in the foreground and higher than in the background. Conversely, the larger the mean gray value, the smaller the marking necessity, because in the foreground the average gray value is significantly higher than at the edge (which contains low-gray-value points).
In particular, the closer the marking necessity F is to 1, the greater the necessity of marking. Thus, in an embodiment, if the marking necessity is within a first preset range, no marking is performed, and if it is within a second preset range, marking is performed; the second preset range is the one closer to 1. In one embodiment, the first preset range is [0, 0.8] and the second preset range is (0.8, 1], i.e. no marking is performed when F ≤ 0.8, and marking is performed when F > 0.8. The purpose of marking the pixel points is to analyze the basic distribution form, or distribution rule, of all the marked points in the spatial domain and to judge whether they satisfy a basic structure of the flange profile, such as a circle or a rectangle. If a basic structure is satisfied, its gradient weights need to be reconstructed; the aim of the reconstruction is to make the output of the reconstructed Sobel operator complement the incomplete edge part, thereby supplementing the incomplete edge.
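Equation (3) itself is not legible in this text, so the sketch below is only an illustrative stand-in: it combines the three stated monotonic dependencies (necessity grows with in-kernel entropy and with I_max − I_min, and shrinks as the mean gray value rises) in one assumed way, together with the 0.8 threshold from the embodiment:

```python
import numpy as np

def marking_necessity(kernel):
    """Illustrative stand-in for equation (3). Only the three monotonic
    dependencies come from the text; the product form is an assumption."""
    _, counts = np.unique(kernel, return_counts=True)
    p = counts / counts.sum()
    entropy = -(p * np.log2(p)).sum()
    max_entropy = np.log2(kernel.size)        # normalization bound
    spread = (kernel.max() - kernel.min()) / 255.0
    brightness = kernel.mean() / 255.0
    score = (entropy / max_entropy) * spread * (1.0 - brightness)
    return float(np.clip(score, 0.0, 1.0))

def should_mark(kernel, threshold=0.8):
    """Mark the kernel's center pixel when necessity falls in (0.8, 1]."""
    return marking_necessity(kernel) > threshold
```

A uniform kernel (pure background or foreground) scores 0 and is never marked, matching the behavior described above.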
Step S33: and connecting the marked pixel points in pairs to determine the shape of the flange part.
After the pixel points are marked, the shape formed by the marked pixel points in the spatial domain needs to be quantized, so that a modification value of the gradient weight can be obtained according to the shape.
In one embodiment, the quantization is performed as follows: the marked pixel points are connected in pairs to obtain connecting lines, and the normal of each connecting line, passing through its midpoint, is determined. After the connection is completed, the intersection of all the normals is analyzed. If all normals intersect at one point, the flange part has a circular profile; if all normals are parallel to each other, the flange part has a rectangular profile. For the circular-contour flange in this scene, the final purpose of the scheme is to complement the positions where the edge contour is incomplete, so the gradient weight modification value must be determined according to the shape, thereby achieving the purpose of complementing the contour defect.
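The normal-intersection test can be sketched in pure 2-D geometry; the tolerances and the classification labels below are illustrative:

```python
import numpy as np
from itertools import combinations

def classify_shape(points, tol=1e-6):
    """'circle' if all chord perpendicular bisectors meet in one point,
    'rectangle' (points along a straight edge) if all are parallel,
    'unknown' otherwise."""
    mids, dirs = [], []
    for p, q in combinations([np.asarray(pt, float) for pt in points], 2):
        chord = q - p
        mids.append((p + q) / 2.0)
        n = np.array([-chord[1], chord[0]])   # bisector direction (normal to chord)
        dirs.append(n / np.linalg.norm(n))
    d0 = dirs[0]
    if all(abs(abs(d @ d0) - 1.0) < tol for d in dirs):
        return "rectangle"
    # Intersect the first bisector with one that is not parallel to it.
    for m1, d1 in zip(mids[1:], dirs[1:]):
        A = np.column_stack([d0, -d1])
        if abs(np.linalg.det(A)) > tol:
            t = np.linalg.solve(A, m1 - mids[0])[0]
            center = mids[0] + t * d0
            break
    else:
        return "unknown"
    # Every bisector must pass through the candidate center.
    for m, d in zip(mids, dirs):
        v = center - m
        if abs(d[0] * v[1] - d[1] * v[0]) > 1e-6:
            return "unknown"
    return "circle"
```

For points on a circle every chord's perpendicular bisector passes through the center, which is exactly the "all normals intersect at one point" condition.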
Step S34: the weight modification value is determined based on the shape.
In step S12, the convolution kernel size of the operator has already been determined; now the calculation mode of the operator and its output result need to be determined. The calculation mode is determined by the basic contour: for example, under a circular flange contour, weights must be added following the circular rule so that the output result can complement the defect. The method of adding weights around the circular rule is as follows: the convolution kernel completely covers the edge width, and compared with the ordinary gradient output, which takes the gray values of the pixel points above and below the central pixel point, the weight in the direction formed by the marked pixel-point connecting lines is increased to satisfy the circular rule, so that the increased output value better conforms to the circular flange outline.
In an embodiment, referring to fig. 4, there is a connecting line of marked pixel points in the 3π/4 direction; this connecting line defines a gradient direction, and the weight value in the direction of the connecting line is increased to obtain the weight modification value.
Step S14: and optimizing the initial sobel operator by using the weight modification value, and processing the flange part by using the optimized sobel operator so as to acquire the contour information corresponding to the flange part.
Specifically, the flange part is processed by utilizing the optimized sobel operator to obtain a gradient output value; and determining contour information corresponding to the flange part based on the gradient output value.
In one embodiment, the gradient output value is obtained using the following formula (2):

G = w_1|G_1| + w_2|G_2| + w_3|G_3| + w_4|G_4|    (2)

where G represents the gradient output value of the Sobel operator; G_1, G_2, G_3, G_4 represent the components of the gradient in the 4 directions; and w_1, w_2, w_3, w_4 represent the weight modification values of the 4 directions. In the initial Sobel operator the 4 weights are equal, and in the optimized Sobel operator the weight of the marked connecting-line direction is increased according to n, where n represents the number of marked pixel points in the connecting-line direction.
The necessity of marking output points is obtained by quantizing the basic characteristics of the pixel-point distribution within the convolution kernel, and the distribution form of the marked points in the spatial domain is then quantized, thereby determining the modification values of the weights.
The original operator is thus divided into two cases: the first is the original gradient output, i.e. there are no marked points in the operator; the second is a gradient output containing weights, i.e. there are marked points in the operator. Adding this logic to the Sobel operator yields an optimized operator with weights. The flange part image is processed with the optimized Sobel operator to obtain a gradient output value, and the contour information corresponding to the flange part is determined based on the gradient output value, so that a complete edge contour image can be obtained.
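The two cases above can be sketched as a weighted four-direction gradient output; the diagonal 3×3 kernels are common extensions of the standard Sobel pair, and the concrete weight values below are illustrative assumptions rather than the patent's equation (2):

```python
import numpy as np

# 3x3 derivative kernels for the 0°, 90°, 45°, 135° directions.
KERNELS = {
    0:   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
    90:  np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),
    45:  np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),
    135: np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),
}

def convolve3(img, k):
    """Plain 'valid' 3x3 correlation, sufficient for this sketch."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for r in range(H - 2):
        for c in range(W - 2):
            out[r, c] = (img[r:r + 3, c:c + 3] * k).sum()
    return out

def weighted_gradient(img, weights=None):
    """Gradient magnitude G = sum_i w_i |G_i| over the four directions.
    With no marked points all weights are equal (here 1/4); boosting one
    direction's weight mimics the optimized operator."""
    if weights is None:
        weights = {d: 0.25 for d in KERNELS}
    img = img.astype(float)
    return sum(w * np.abs(convolve3(img, KERNELS[d]))
               for d, w in weights.items())
```

Boosting the weight of the direction that matches the marked connecting lines raises the gradient response along the incomplete edge, which is the stated purpose of the optimization.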
According to the method, the prior features of the flange part are analyzed and quantized, and it is judged whether the pixel-point distribution within the operator's convolution kernel satisfies those prior features, which supports the subsequent judgment of marking necessity for output points and the modification of the weights. Compared with extracting the contour directly with the Sobel operator, marking the output points and analyzing their distribution form yields a weight for each direction in the convolution kernel, so that the operator both identifies the contour and supplements the incomplete parts of the original image, effectively solving the problems of unclear and incomplete detected contours.
The foregoing is only the embodiments of the present invention, and therefore, the patent scope of the invention is not limited thereto, and all equivalent structures or equivalent processes using the descriptions of the present invention and the accompanying drawings, or direct or indirect application in other related technical fields, are included in the scope of the invention.
Claims (10)
1. The flange part contour visual positioning method is characterized by comprising the following steps of:
acquiring an image of the flange part, and converting the image into a gray image;
determining a convolution kernel size based on the grayscale image;
determining a weight modification value based on the distribution characteristics of the pixel points in the convolution kernel;
and optimizing the initial sobel operator by using the weight modification value, and processing the flange part by using the optimized sobel operator so as to acquire the contour information corresponding to the flange part.
2. The flange part contour visual positioning method according to claim 1, wherein determining a convolution kernel size based on the gray scale image comprises:
detecting gray values of pixel points in a window in a sliding process by using two windows sliding in parallel in the gray image, and further determining abrupt pixel points, wherein the abrupt pixel points are pixel points corresponding to preset values when the gray values jump from 0 to the preset values;
connecting the abrupt pixel points, and determining a normal line from a perpendicular line of the connection line;
determining a localized area of the flange part based on the normal;
the convolution kernel size is determined based on the local region.
3. A flange part contour visual positioning method as defined in claim 2, wherein determining said convolution kernel size based on said localized area comprises:
calculating entropy of the local area based on probability of occurrence of different gray values in the local area;
determining a first pixel point when the entropy of the local area changes from 0 to not 0, and determining a second pixel point when the entropy of the local area changes from not 0 to 0;
and determining the distance between the first pixel point and the second pixel point, wherein the distance is the convolution kernel size.
4. A flange part contour visual positioning method according to claim 3, characterized in that calculating entropy of the local area based on probability of occurrence of different gray values in the local area comprises:
the entropy of the local region is calculated using the following equation (1):

H(x) = -∑_{i=1}^{m} p_i log p_i    (1)

where H(x) represents the entropy of a local area x; p_i represents the probability of occurrence of the i-th gray value in the local area; and m represents the number of different gray values.
5. The method of claim 1, wherein determining the weight modification value based on the distribution characteristics of the pixels within the convolution kernel comprises:
judging the necessity of marking the central pixel point in the convolution kernel based on the entropy in the convolution kernel, the maximum value and the minimum value of the gray values of the pixel points in the convolution kernel;
if the marking necessity is in the first preset range, not marking, and if the marking necessity is in the second preset range, marking;
the marked pixel points are connected in pairs, and the shape of the flange part is determined;
the weight modification value is determined based on the shape.
6. The method for visually locating a contour of a flange part according to claim 5, wherein the step of connecting the marked pixels in pairs to determine the shape of the flange part comprises:
connecting the marked pixel points in pairs to obtain a connecting line;
determining a normal line of a connecting line, the normal line passing through a midpoint of the connecting line;
if all normals intersect at one point, the flange part has a circular profile; if all normals are parallel to each other, the flange part has a rectangular profile.
7. The method of claim 5, wherein determining the weight modification value based on the shape comprises:
if the flange part is of a circular outline, the weight in the direction formed by the connecting lines is increased and the weights in the other directions are reduced, thereby determining the weight modification value.
8. The method for visually locating a contour of a flange part according to claim 1, wherein the processing the flange part by using an optimized sobel operator to obtain contour information corresponding to the flange part comprises:
processing the flange part by using the optimized sobel operator to obtain a gradient output value;
and determining contour information corresponding to the flange part based on the gradient output value.
9. The method for visual positioning of a flange part contour according to claim 8, wherein the processing of the flange part by using the optimized sobel operator to obtain a gradient output value comprises:
obtaining a gradient output value using the following formula (2):

G = √(w₁G₁² + w₂G₂² + w₃G₃² + w₄G₄²)  (2)

where G denotes the gradient output value of the sobel operator; G₁, G₂, G₃ and G₄ denote the components of G in the 4 directions; w₁, w₂, w₃ and w₄ denote the weight modification values in the 4 directions, each equal to 1 in the initial sobel operator; N denotes the number of pixel points marked in the direction formed by the connecting lines, and in the optimized sobel operator the weight modification values are determined from N.
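One plausible reading of a weighted 4-direction sobel combination can be sketched as below. The diagonal kernels and the square-root-of-weighted-squares combination are common conventions assumed here, not details taken from the patent.

```python
import numpy as np

# Directional 3x3 Sobel-style kernels at 0, 45, 90 and 135 degrees
# (the diagonal forms are the commonly used extensions of the operator).
KERNELS = {
    0:   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
    45:  np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),
    90:  np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),
    135: np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),
}

def weighted_sobel(image, weights=None):
    """Gradient magnitude G = sqrt(sum_i w_i * G_i^2) over 4 directions.

    weights=None corresponds to the unmodified operator (all w_i = 1)."""
    if weights is None:
        weights = {a: 1.0 for a in KERNELS}
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for angle, k in KERNELS.items():
        g = np.zeros_like(out)
        for i in range(3):          # 'valid' 3x3 correlation via shifts
            for j in range(3):
                g += k[i, j] * img[i:i + h - 2, j:j + w - 2]
        out += weights[angle] * g ** 2
    return np.sqrt(out)

img = np.zeros((5, 5))
img[:, 3:] = 255.0                  # vertical step edge
print(weighted_sobel(img).max())    # strong response at the edge
```

Raising one direction's weight can only increase the magnitude at pixels where that directional component is non-zero, which is the intended effect of the contour-aligned weight modification.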
10. The flange part contour visual positioning method according to claim 5, wherein determining the necessity of marking the central pixel point in the convolution kernel based on the entropy within the convolution kernel and on the maximum and minimum gray values of the pixel points in the convolution kernel comprises:
calculating the necessity of marking the central pixel point in the convolution kernel using the following equation (3):

where Y denotes the necessity of marking the central pixel point in the convolution kernel; i indexes the pixels of different gray values within the convolution kernel; pᵢ denotes the probability of occurrence of pixels with the i-th gray value in the convolution kernel; and g_max and g_min denote the maximum and minimum gray values of the pixel points in the convolution kernel, respectively.
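The inputs named in claim 10 (window entropy, maximum and minimum gray values) can be computed as follows. Since equation (3) itself is not legible in this text, the final product below is a clearly hypothetical stand-in, not the patented formula.

```python
import numpy as np

def marking_necessity(window):
    """Compute the inputs claim 10 names and combine them.

    The entropy * (g_max - g_min) product is an illustrative stand-in
    for equation (3), whose exact form is not reproduced here."""
    region = np.asarray(window, dtype=float)
    _, counts = np.unique(region, return_counts=True)
    p = counts / counts.sum()
    entropy = abs(float(-np.sum(p * np.log2(p))))
    g_max, g_min = region.max(), region.min()
    # Hypothetical combination: high entropy together with high contrast
    # suggests the central pixel lies on a contour and should be marked.
    return entropy * (g_max - g_min)

print(marking_necessity([[0, 0, 0], [0, 255, 255], [255, 255, 255]]))
```

Thresholding such a score against the first and second preset ranges of claim 5 then yields the mark / do-not-mark decision.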
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310693060.5A CN116433700B (en) | 2023-06-13 | 2023-06-13 | Visual positioning method for flange part contour |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310693060.5A CN116433700B (en) | 2023-06-13 | 2023-06-13 | Visual positioning method for flange part contour |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116433700A true CN116433700A (en) | 2023-07-14 |
CN116433700B CN116433700B (en) | 2023-08-18 |
Family
ID=87083612
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310693060.5A Active CN116433700B (en) | 2023-06-13 | 2023-06-13 | Visual positioning method for flange part contour |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116433700B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116704209A (en) * | 2023-08-08 | 2023-09-05 | 山东顺发重工有限公司 | Quick flange contour extraction method and system |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5233670A (en) * | 1990-07-31 | 1993-08-03 | Thomson Trt Defense | Method and device for the real-time localization of rectilinear contours in a digitized image, notably for shape recognition in scene analysis processing |
JP2000339478A (en) * | 1999-05-31 | 2000-12-08 | Nec Corp | Device and method for processing picture |
US20060152764A1 (en) * | 2005-01-13 | 2006-07-13 | Xerox Corporation | Systems and methods for controlling a tone reproduction curve using error diffusion |
US20070253640A1 (en) * | 2006-04-24 | 2007-11-01 | Pandora International Ltd. | Image manipulation method and apparatus |
RU2360289C1 (en) * | 2008-08-11 | 2009-06-27 | Евгений Александрович Самойлин | Method of noise-immune gradient detection of contours of objects on digital images |
CN106709909A (en) * | 2016-12-13 | 2017-05-24 | 重庆理工大学 | Flexible robot vision recognition and positioning system based on depth learning |
CN109472271A (en) * | 2018-11-01 | 2019-03-15 | 凌云光技术集团有限责任公司 | Printed circuit board image contour extraction method and device |
CN110687120A (en) * | 2019-09-18 | 2020-01-14 | 浙江工商大学 | Flange appearance quality detecting system |
WO2020103417A1 (en) * | 2018-11-20 | 2020-05-28 | 平安科技(深圳)有限公司 | Bmi evaluation method and device, and computer readable storage medium |
CN111696107A (en) * | 2020-08-05 | 2020-09-22 | 南京知谱光电科技有限公司 | Molten pool contour image extraction method for realizing closed connected domain |
CN111985329A (en) * | 2020-07-16 | 2020-11-24 | 浙江工业大学 | Remote sensing image information extraction method based on FCN-8s and improved Canny edge detection |
WO2020253062A1 (en) * | 2019-06-20 | 2020-12-24 | 平安科技(深圳)有限公司 | Method and apparatus for detecting image border |
WO2021000524A1 (en) * | 2019-07-03 | 2021-01-07 | 研祥智能科技股份有限公司 | Hole protection cap detection method and apparatus, computer device and storage medium |
CN113450292A (en) * | 2021-06-17 | 2021-09-28 | 重庆理工大学 | High-precision visual positioning method for PCBA parts |
CN115082410A (en) * | 2022-06-29 | 2022-09-20 | 西安工程大学 | Clamp spring defect detection method based on image processing |
CN115096206A (en) * | 2022-05-18 | 2022-09-23 | 西北工业大学 | Part size high-precision measurement method based on machine vision |
Non-Patent Citations (4)
Title |
---|
Wan Xin; Wan Wen: "Design of a machine-vision-based thread parameter detection ***", Journal of Nanchang Hangkong University (Natural Science Edition), no. 03 *
Wu Jigang; Bin Hongzan: "Sub-pixel edge detection in machine vision images of thin-sheet parts", China Mechanical Engineering, no. 03 *
Hua Chunjian; Xiong Xuemei; Chen Ying: "Arc contour feature extraction of workpieces based on the Sobel operator", Laser & Optoelectronics Progress, no. 02 *
Yuan Weiqi; Dong Qian; Sang Haifeng: "Hand contour tracking algorithm based on directional gradient extrema", Optics and Precision Engineering, no. 07 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116704209A (en) * | 2023-08-08 | 2023-09-05 | 山东顺发重工有限公司 | Quick flange contour extraction method and system |
CN116704209B (en) * | 2023-08-08 | 2023-10-17 | 山东顺发重工有限公司 | Quick flange contour extraction method and system |
Also Published As
Publication number | Publication date |
---|---|
CN116433700B (en) | 2023-08-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PE01 | Entry into force of the registration of the contract for pledge of patent right | ||
Denomination of invention: A visual positioning method for the contour of flange parts Granted publication date: 20230818 Pledgee: Weihai commercial bank Limited by Share Ltd. Ji'nan branch Pledgor: Shandong jinrunyuan flange Machinery Co.,Ltd. Registration number: Y2024980003836 |