CN113487679B - Visual ranging signal processing method for automatic focusing system of laser marking machine - Google Patents


Info

Publication number
CN113487679B
CN113487679B
Authority
CN
China
Prior art keywords
level set
curve
image
evolution
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110723762.4A
Other languages
Chinese (zh)
Other versions
CN113487679A (en)
Inventor
孙晶华
吴婧雯
张晓峻
王佳欢
张书明
朱怀武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University
Priority to CN202110723762.4A
Publication of CN113487679A
Application granted
Publication of CN113487679B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00 Measuring distances in line of sight; Optical rangefinders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a visual ranging signal processing method for the automatic focusing system of a laser marking machine, which comprises the following steps. S1: calibrating the camera with a camera calibration algorithm to obtain the internal and external camera parameters, and acquiring a binocular image of the marked object; S2: extracting the edge contour of the marked object with an image segmentation algorithm; S3: performing stereo matching on the binocular images to obtain the image parallax; S4: calculating the depth of the corner points from the acquired image parallax through the similar-triangle principle, completing the distance test of the marked object; S5: using GUI controls to complete the interaction between the user and the program. The method can handle image defects such as blurred edges and uneven gray levels caused by noise, field offset effects and other factors; it is universal, adapts to most types of marked objects, and avoids the problem that existing automatic focusing algorithms cannot adapt to marked objects that differ in size, position and surface properties.

Description

Visual ranging signal processing method for automatic focusing system of laser marking machine
Technical Field
The invention belongs to the field of laser marking, and particularly relates to a visual ranging signal processing method of an automatic focusing system of a laser marking machine.
Background
A laser marking machine generates a high-energy laser beam that irradiates the surface of a workpiece; the resulting heat etches the workpiece so that the required characters and graphics appear on its surface. Marking quality depends on how accurately the marking machine is focused on the workpiece, yet the focusing methods of current laser marking machines all have shortcomings that affect marking precision and efficiency. To achieve precise focusing, various focusing methods have been proposed. Chinese patent CN103350281A collects a series of images by adjusting the distance between the object and the image acquisition device, obtains the edge information of each sequence image, and divides each sequence image into the same regions; the clearly focused position of each region is then acquired to obtain accurate position information. Its precision depends on the size of the regions into which each image of the sequence is divided: if the regions are large, marking precision decreases, and the extraction and fusion rules further reduce the precision of the height information. Chinese patent CN208991976U drives the laser cavity up and down with a stepping motor and a transmission screw; when the induction striker contacts the workpiece, the origin position is recorded, and the control device then moves the laser cavity by the set focal distance. The hardware senses the movement of the striker and measures and records the position of the marked object, which places high demands on the hardware of the device and lacks universality. Chinese patent CN110052704A obtains the depth information of the marked object through binocular stereo vision: the acquired images are converted to gray scale, features are extracted, and matching yields the depth information. This method must extract the contour of the marked object from the image gray-level information, but in practice most marked objects exhibit blurred edges, uneven gray levels and similar phenomena due to noise, field offset effects and other factors, so the method is still lacking in practical application.
Laser marking machines are widely used in daily life and production, and the types and sizes of marked articles increase day by day; the demands that precise marking places on the laser marking machine have shifted to the accuracy of the focus and the diversity of marked materials. In practice, marked objects differ in size, position and surface properties, so existing focusing methods cannot meet the needs of production and daily life.
Disclosure of Invention
The invention aims to solve the problems of conventional laser marking machines: focusing is difficult and inaccurate, the machine suits only a single marked material, and focusing precision is reduced by overlapping boundary gray levels.
The purpose of the invention is realized as follows:
a visual ranging signal processing method of an automatic focusing system of a laser marking machine comprises the following steps:
s1: calibrating the camera by adopting a camera calibration algorithm to obtain internal and external parameters of the camera and obtain a binocular image of the marked object;
s2: extracting the edge contour of the marked object by adopting an image segmentation algorithm;
s3: stereo matching is carried out by utilizing binocular images to obtain image parallax;
s4: calculating to obtain the depth of an angular point by utilizing the acquired image parallax through a similar triangle principle, and completing the distance test of the marked object;
s5: and utilizing the GUI control to complete the interaction between the user and the program.
The level set image segmentation method based on the active contour model constructs a gradient descent flow using the variational principle, solves for the minimum point of the energy functional, and iteratively determines the convergence rule. Step S2 specifically comprises the following steps:
s2-1: a level set equation is used for representing a specific evolution mode of the curve;
level set function Φ:
Φ(x) = ±dist(x, C)
where dist(x, C) is the distance from the pixel point x to the evolution curve C, and the sign is determined by the position of x relative to the curve: Φ(x) is positive when x lies outside the closed curve and negative when x lies inside it.
Curve evolution is represented with the high-dimensional level set function as follows: assume an implicitly represented closed curve C, which is the zero level set C(t) = {(x, y) : Φ(x, y, t) = 0} of the time-varying high-dimensional level set function Φ(x, y, t). When Φ is sufficiently regular, differentiating along the evolving curve gives
∂Φ/∂t + ∇Φ · C′(t) = 0
from which the curve evolution expressed by the basic level set equation is obtained:
∂Φ/∂t + F|∇Φ| = 0,  Φ(x, y, 0) = Φ₀(x, y)
After the curve evolution equation is embedded into the high-dimensional implicit function, the resulting equation describes the specific evolution of the curve. With the initial condition C₀ known, the closed-curve evolution can be represented by the basic level set equation: given the known initial condition Φ₀, the level set function Φ is solved, and the zero level set curve satisfying Φ(x, y, t) = 0 is obtained at any time t.
S2-2: solving the minimum of the energy functional with the gradient descent flow, and finally completing the convergence of the evolution curve from the iteration result.
In the one-dimensional case, a function satisfying the fixed endpoint conditions u(x₀) = a, u(x₁) = b can be written in functional form:
E(u) = ∫_{x₀}^{x₁} F(x, u(x), u′(x)) dx
After mathematical transformation, the gradient descent flow form is obtained:
∂u/∂t = −(∂F/∂u − d/dx(∂F/∂u′))
the step S3 specifically comprises the following steps:
s3-1: using a point-like feature descriptor as a matching primitive;
s3-2: the SSD measurement function is adopted as a similarity measurement function to complete corresponding point matching;
s3-3: and performing global matching by using a dynamic programming method to obtain a final matching result of the matching point set, namely obtaining the matched image parallax.
Compared with the prior art, the invention has the beneficial effects that:
the method adopts a binocular camera to collect the image of the object to be marked below a laser marking machine head, and cuts the image by a level set method based on an active contour model to finish the accurate extraction of the contour; and the intersection point of the level set method and the global matching for the cost energy function is utilized, the precision and the algorithm processing speed are improved through temporary data storage, and the depth information extraction of the marked object is realized. The method has the advantages that the phenomena of fuzzy edges, uneven gray scale and the like of the image caused by factors such as noise, field offset effect and the like can be processed, the automatic focusing method of the laser marking machine has universality, can be adapted to most kinds of marked objects, avoids the problem that the existing automatic focusing algorithm cannot be adapted due to different sizes, positions and surface attributes among the marked objects, and solves the problem that the local gray scale attribute of the material is changed due to the material permeation effect caused by the environment humidity and the abnormal marked objects which cannot be processed by the algorithm; when the marked object with larger thickness is processed, the gray level stacking phenomenon caused by the complex edge structure reduces the image space resolution and makes the object boundary difficult to be defined; if the factory environment is complex, the setting frame of the laser marking machine vibrates or has micro displacement, and at the moment, the camera and the laser marking machine move periodically or non-periodically to cause the motion artifact and other conditions in the image.
Drawings
FIG. 1 is a flow chart of the image segmentation algorithm of the present invention.
FIG. 2 is a diagram of the graphical user interface effect of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
The invention discloses a visual ranging signal processing method of an automatic focusing system of a laser marking machine, which comprises the following steps:
s1: calibrating the camera by adopting a camera calibration algorithm to obtain internal and external parameters of the camera and obtain a binocular image of the marked object;
s2: extracting the edge contour of the marked object by adopting an image segmentation algorithm;
s3: stereo matching is carried out by utilizing binocular images to obtain image parallax;
s4: and calculating to obtain the depth of the corner point by utilizing the acquired image parallax through a similar triangle principle, and completing the distance test of the marked object.
S5: and utilizing the GUI control to complete the interaction between the user and the program.
Further: the method for extracting the edge contour of the marked object by adopting the image segmentation algorithm comprises the following steps:
s2-1: a level set equation is used for representing a specific evolution mode of the curve;
s2-2: and solving the minimum value of the energy functional by using the gradient descending flow, and finally finishing the convergence of the evolution curve by using an iteration result.
Further: the method for acquiring the image parallax by utilizing the binocular image to perform stereo matching comprises the following steps:
s3-1: using a dotted feature descriptor as a matching primitive;
s3-2: the SSD measurement function is adopted as a similarity measurement function to complete corresponding point matching;
s3-3: and performing global matching by using a dynamic programming method to obtain a final matching result of the matching point set, namely obtaining the matched image parallax.
S1: calibrating the camera by adopting a camera calibration algorithm to obtain internal and external parameters of the camera and obtain a binocular image of the marked object;
determining internal geometric parameters and optical characteristics of the camera, and determining a rotation matrix and a translation matrix in the process of converting a camera coordinate system into a world coordinate system. Determining internal and external parameters and distortion parameters of a camera by calibrating a binocular camera, constructing a geometric model of camera imaging, and determining a geometric position relation between a pixel point in an image and a surface point of a space object in a three-dimensional space.
S2: extracting the edge contour of the marked object by adopting an image segmentation algorithm;
the image segmentation algorithm flow chart is shown in fig. 1. The level set segmentation method based on the active contour model comprises the following 5 steps: (1) an initialization curve 1 is given. (2) determining additional constraint 2. And (3) iteratively determining a convergence rule 3. And (4) obtaining an evolution curve 4 of the convergence at the edge of the image. And (5) realizing target object segmentation 5 according to the curve.
Level set function Φ:
Φ(x) = ±dist(x, C)
where dist(x, C) is the distance from the pixel point x to the evolution curve C, and the sign is determined by the position of x relative to the curve: Φ(x) is positive when x lies outside the closed curve and negative when x lies inside it.
Curve evolution is represented with the high-dimensional level set function as follows: assume an implicitly represented closed curve C, which is the zero level set C(t) = {(x, y) : Φ(x, y, t) = 0} of the time-varying high-dimensional level set function Φ(x, y, t). When Φ is sufficiently regular, differentiating along the evolving curve gives
∂Φ/∂t + ∇Φ · C′(t) = 0
from which the curve evolution expressed by the basic level set equation is obtained:
∂Φ/∂t + F|∇Φ| = 0,  Φ(x, y, 0) = Φ₀(x, y)
After the curve evolution equation is embedded into the high-dimensional implicit function, the resulting equation describes the specific evolution of the curve. With the initial condition C₀ known, the closed-curve evolution can be represented by the basic level set equation: given the known initial condition Φ₀, the level set function Φ is solved, and the zero level set curve satisfying Φ(x, y, t) = 0 is obtained at any time t.
Image segmentation is determined by minimizing the curve energy functional. In the one-dimensional case, a function satisfying the fixed endpoint conditions u(x₀) = a, u(x₁) = b can be written in functional form:
E(u) = ∫_{x₀}^{x₁} F(x, u(x), u′(x)) dx
By Fermat's theorem in calculus, the minimum of the functional E(u) is attained at a point where its first variation vanishes. When the optimal solution u(x) is perturbed, u(x) + v(x) is obtained, and the corresponding energy functional is E(u + v). Since E(u) takes an extreme value and the perturbation v(x) is small enough not to affect the boundary values, u(x₀) + v(x₀) = a, u(x₁) + v(x₁) = b together with u(x₀) = a, u(x₁) = b yield:
v(x₀) = v(x₁) = 0
Because the perturbation v(x) and its derivative are sufficiently small, F can be measured with a Taylor expansion:
F(x, u + v, u′ + v′) ≈ F(x, u, u′) + v·∂F/∂u + v′·∂F/∂u′
Rearranging gives:
E(u + v) − E(u) ≈ ∫_{x₀}^{x₁} (v·∂F/∂u + v′·∂F/∂u′) dx = 0
Integrating by parts and rearranging yields the Euler equation of the variational problem:
∂F/∂u − d/dx(∂F/∂u′) = 0
The minimum of the energy functional can now be found by solving the Euler equation of the variational problem. The Euler equation is a typical nonlinear partial differential equation; a time variable must be introduced during its solution, and the gradient descent flow method converts the equation into a dynamic partial differential equation, giving the gradient descent flow form corresponding to the variational equation:
∂u/∂t = −(∂F/∂u − d/dx(∂F/∂u′))
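A minimal numerical sketch of the resulting evolution ∂Φ/∂t = −F|∇Φ| is given below, using a simple edge-stopping speed F = 1/(1 + |∇I|²) as an assumed example; the patent's actual energy functional and convergence rule are not reproduced here, and reinitialization of Φ is omitted.

```python
# Minimal sketch of the level set update  ∂Φ/∂t = -F|∇Φ|  (an illustration of the
# basic equation above, not the patent's exact energy functional).
# With F = 1 / (1 + |∇I|²) > 0 the zero level set expands and slows near strong edges.
import numpy as np


def evolve(phi, image, steps=200, dt=0.5):
    gy, gx = np.gradient(image.astype(float))
    F = 1.0 / (1.0 + gx ** 2 + gy ** 2)          # edge-stopping speed
    for _ in range(steps):
        py, px = np.gradient(phi)
        grad_norm = np.sqrt(px ** 2 + py ** 2) + 1e-8
        phi = phi - dt * F * grad_norm           # explicit Euler step of the PDE
    return phi                                    # the contour is the zero level set


# usage: region = evolve(phi0, img) < 0 recovers the segmented region
```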
s3: performing stereo matching by using binocular images to obtain image parallax;
and after selecting the matching elements, performing horizontal epipolar line matching search along the recovered binocular stereo graph pair to further obtain the corresponding relation of each pixel point pair in the matched image, and calculating a disparity map for evaluation. The stereo matching algorithm utilizes basic assumptions and specific constraint conditions to eliminate the inapplicability of stereo matching, and converts the stereo matching problem into the problem of solving the optimal solution of an energy function.
The global cost energy function is:
γ(M) = N_occ·k_occ − N_m·k_r + Σ_{i=1}^{N_m} DSI(x_i, y_i)
The dynamic programming algorithm performs a global energy optimization over each image row and finds in each row the matching sequence M that minimizes γ(M), where γ(M) is the matching cost of the sequence M. The number of occlusion points in M is N_occ, comprising left occlusions and right occlusions that correspond to the occluded parts of the left and right images of the binocular pair and are weighted by the occlusion factor k_occ; enough disparity information must be added at image boundaries blurred by binocular occlusion. The number of successfully matched corresponding points in M is N_m, the maximum difference between matched corresponding points is controlled by the matching factor k_r, and the dissimilarity of a single matching point is DSI(x_i, y_i); the summation represents the global dissimilarity and serves as the criterion for determining corresponding points.
The similarity of corresponding points between the two binocular images is measured with a similarity measure function, and window accumulation is adopted to improve noise robustness during the similarity measurement. The similarity measure function is calculated with the SSD operator.
Sum of Squared Differences (SSD):
C(u, v, d) = Σ_{(i,j)∈W} [I_l(u+i, v+j) − I_r(u+i+d, v+j)]²
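A direct sketch of this window-accumulated cost follows; the window half-width is an assumed value, and the disparity d shifts the first index as written in the formula. A row of such costs over all candidate disparities forms the DSI matrix consumed by the dynamic-programming step above.

```python
# Sketch of the window-accumulated SSD cost
#   C(u, v, d) = Σ_{(i,j)∈W} [I_l(u+i, v+j) − I_r(u+i+d, v+j)]²
# assuming (u, v) is far enough from the image border for the window to fit.
import numpy as np


def ssd_cost(I_l, I_r, u, v, d, half=3):
    """SSD over a (2*half+1) x (2*half+1) window W centred at (u, v)."""
    wl = I_l[u - half:u + half + 1, v - half:v + half + 1].astype(float)
    wr = I_r[u - half + d:u + half + 1 + d, v - half:v + half + 1].astype(float)
    return float(np.sum((wl - wr) ** 2))
```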
s4: calculating to obtain the depth of an angular point by utilizing the acquired image parallax through a similar triangle principle, and completing the distance test of the marked object;
the method comprises the steps of observing the same scene by using two or more different viewpoints, obtaining pictures at different view angles, and measuring and calculating the position offset between picture pixels through a visual ranging model, namely the parallax existing between two pictures, so as to obtain the three-dimensional information of an object.
The projection difference generated when the left camera and the right camera observe the same scene is called as parallax, and can be obtained according to the triangulation principle:
Figure BDA0003137691670000062
Figure BDA0003137691670000063
the base length b of the two cameras, the depth of field Z are the distance from an object point to the base line in the three-dimensional world, and the focal length of the cameras is f. The corresponding position of the left camera and the right camera in the digital image is x r 、x l
Under an ideal binocular vision model, the imaging planes of the left camera and the right camera are parallel to a base line, and the parallax is the difference between the horizontal coordinates of corresponding image points on the two projection images: d = x r -x l . The depth of field is proportional to the camera focal length and the base length, and inversely proportional to the parallax.
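For illustration, the depth recovery Z = f·b/d can be applied per pixel to a disparity map as sketched below; the focal length (in pixels) and baseline values are placeholders, not calibration results from the patent.

```python
# Sketch of the triangulation step Z = f·b/d (f in pixels, b in mm, so Z is in mm);
# the default focal length and baseline are assumed example values.
import numpy as np


def depth_from_disparity(disparity, f_px=1200.0, baseline_mm=60.0):
    disp = np.asarray(disparity, dtype=float)
    Z = np.full_like(disp, np.nan)
    valid = disp > 0                      # zero or negative disparity has no depth
    Z[valid] = f_px * baseline_mm / disp[valid]
    return Z                              # distance from the marked object to the baseline
```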
S5: utilizing the GUI control to complete the interaction between the user and the program;
the user graphical interface is shown in fig. 2. And adding controls to the window by using a layout editor, and finishing the interaction between the user and the program by using the GUI controls. And controlling a pointer on a screen or moving a cursor by using a mouse or other access equipment, and informing an application program by clicking a left mouse button to realize the selection of an object or execute other operations. The M file for generating the GUI in the GUIDE environment can control the GUI internal components, so that the GUI internal module completes corresponding response according to the operation of the user, and the specific response is realized by Callback (Callback).
The marked object is placed under the laser marking machine and adjusted so that it forms a complete, clear image in both the left and right cameras of the binocular stereo vision device. After 'placing the object' is clicked, the pre-marking function of the laser marking machine starts and a video screenshot of the binocular cameras pops up. Clicking 'height of the displayed object' runs the pre-designed algorithms in sequence: the level set algorithm based on the active contour model is completed, then the stereo matching algorithm of binocular stereo vision; finally the depth information of the marked object is extracted, and the calculated height of the laser marking machine head is displayed in the blank box preset beside the 'height when marking is displayed' button. After the depth information extraction module finishes, 'start marking' is clicked, the running focusing algorithm is closed, the height of the laser marking machine head is adjusted so that the emitted laser is focused exactly on the upper surface of the marked object, and the accompanying program of the laser marking machine runs to complete the marking pattern task set in advance in the marking program.
The binocular stereo vision experimental platform consists of a laser marking machine, a true-color binocular camera and a computer. The software performs the calculation in simulation and comprises the following four modules: a binocular camera calibration module, an image segmentation module, a stereo matching and depth information extraction module, and a graphical user interface (GUI) module. The binocular camera calibration module determines the camera parameters, the image segmentation module extracts the boundary of the marked object, the stereo matching and depth information extraction module extracts the depth information of the image, and the GUI module realizes human-computer interaction.
The binocular camera is fixed above the laser head, with the midpoint of its baseline on the axis of the laser head; the camera is mounted horizontally in a plane parallel to the worktable, i.e. parallel to the horizontal direction of the laser head.
A binocular vision ranging model is constructed based on a computer vision theory, and relevant parameters of the model are determined by adopting a Zhang Zhengyou calibration algorithm.
The level set image segmentation method based on the active contour model constructs a gradient descent flow by using a variational principle, solves an energy functional minimum value point, and iteratively determines a convergence rule.
S1: a level set equation is used for representing a specific evolution mode of the curve;
level set function Φ:
Φ(x) = ±dist(x, C)
where dist(x, C) is the distance from the pixel point x to the evolution curve C, and the sign is determined by the position of x relative to the curve: Φ(x) is positive when x lies outside the closed curve and negative when x lies inside it.
Curve evolution is represented with the high-dimensional level set function as follows: assume an implicitly represented closed curve C, which is the zero level set C(t) = {(x, y) : Φ(x, y, t) = 0} of the time-varying high-dimensional level set function Φ(x, y, t). When Φ is sufficiently regular, differentiating along the evolving curve gives
∂Φ/∂t + ∇Φ · C′(t) = 0
from which the curve evolution expressed by the basic level set equation is obtained:
∂Φ/∂t + F|∇Φ| = 0,  Φ(x, y, 0) = Φ₀(x, y)
After the curve evolution equation is embedded into the high-dimensional implicit function, the resulting equation describes the specific evolution of the curve. With the initial condition C₀ known, the closed-curve evolution can be represented by the basic level set equation: given the known initial condition Φ₀, the level set function Φ is solved, and the zero level set curve satisfying Φ(x, y, t) = 0 is obtained at any time t.
S2: the minimum of the energy functional is solved with the gradient descent flow, and the convergence of the evolution curve is finally completed from the iteration result.
In the one-dimensional case, a function satisfying the fixed endpoint conditions u(x₀) = a, u(x₁) = b can be written in functional form:
E(u) = ∫_{x₀}^{x₁} F(x, u(x), u′(x)) dx
After mathematical transformation, the gradient descent flow form is obtained:
∂u/∂t = −(∂F/∂u − d/dx(∂F/∂u′))
and performing global matching by adopting a dynamic programming method, and calculating the similarity measure function by adopting an SSD operator.
And designing a GUI (graphical user interface) based on a user graphic development interface to realize human-computer interaction.
The invention discloses a visual ranging signal processing method, based on binocular stereo vision, for the automatic focusing system of a laser marking machine, which can complete the automatic focusing of an unspecified marking workpiece. The method comprises the following steps: (1) acquire left and right camera images of the target object with a binocular camera; (2) calibrate the binocular camera with a calibration box to obtain the internal and external camera parameters; (3) segment the acquired images based on the active contour model and the level set method, and extract the boundary of the marked object; (4) extract the depth information with the binocular vision stereo matching principle; (5) construct a graphical user interface to realize human-computer interaction. When acquiring the boundary information of the target object, the method minimizes the energy function in the evolution curve model instead of relying on gray-level change, and can handle image defects such as blurred edges and uneven gray levels caused by noise, field offset effects and other factors, which reduces the difficulty of image recognition and improves the accuracy of the whole algorithm. The automatic focusing method of the laser marking machine is universal, adapts to most kinds of marked objects, and solves the problem that existing automatic focusing algorithms cannot adapt to marked objects that differ in size, position and surface properties.

Claims (2)

1. A visual ranging signal processing method of an automatic focusing system of a laser marking machine is characterized by comprising the following steps:
s1: calibrating the camera by adopting a camera calibration algorithm to obtain internal and external parameters of the camera and obtain a binocular image of the marked object;
s2: extracting the edge contour of the marked object by adopting an image segmentation algorithm;
s3: stereo matching is carried out by utilizing binocular images to obtain image parallax;
s4: calculating to obtain the depth of an angular point by utilizing the acquired image parallax through a similar triangle principle, and completing the distance test of the marked object;
s5: utilizing the GUI control to complete the interaction between the user and the program;
the level set image segmentation method based on the active contour model constructs a gradient descent flow by using a variational principle, solves the minimum value point of an energy functional, and iteratively determines a convergence rule, wherein the step S2 specifically comprises the following steps:
s2-1: a level set equation is used for representing a specific evolution mode of the curve;
level set function Φ:
Φ(x) = ±dist(x, C)
where dist(x, C) is the distance from the pixel point x to the evolution curve C, and the sign is determined by the position of x relative to the curve: Φ(x) is positive when x lies outside the closed curve and negative when x lies inside it;
the method for representing curve evolution with the high-dimensional level set function is as follows: assume an implicitly represented closed curve C, which is the zero level set C(t) = {(x, y) : Φ(x, y, t) = 0} of the time-varying high-dimensional level set function Φ(x, y, t); when Φ is sufficiently regular, differentiating along the evolving curve gives
∂Φ/∂t + ∇Φ · C′(t) = 0
and the curve evolution expressed by the basic level set equation is obtained:
∂Φ/∂t + F|∇Φ| = 0,  Φ(x, y, 0) = Φ₀(x, y)
after the curve evolution equation is embedded into the high-dimensional implicit function, the resulting equation describes the specific evolution of the curve; with the initial condition C₀ known, the closed-curve evolution is expressed by the basic level set equation, and given the known initial condition Φ₀, the level set function Φ is solved, the zero level set curve satisfying Φ(x, y, t) = 0 being obtained at any time t;
s2-2: solving the minimum of the energy functional with the gradient descent flow, and finally completing the convergence of the evolution curve from the iteration result;
in the one-dimensional case, a function satisfying the fixed endpoint conditions u(x₀) = a, u(x₁) = b is called the functional form:
E(u) = ∫_{x₀}^{x₁} F(x, u(x), u′(x)) dx
after mathematical transformation, the gradient descent flow form is obtained:
∂u/∂t = −(∂F/∂u − d/dx(∂F/∂u′))
2. the method for processing the visual ranging signal of the automatic focusing system of the laser marking machine according to claim 1, wherein the step S3 specifically comprises the following steps:
s3-1: using a dotted feature descriptor as a matching primitive;
s3-2: the SSD measurement function is adopted as a similarity measurement function to complete corresponding point matching;
s3-3: and performing global matching by using a dynamic programming method to obtain a final matching result of the matching point set, namely obtaining the matched image parallax.
CN202110723762.4A 2021-06-29 2021-06-29 Visual ranging signal processing method for automatic focusing system of laser marking machine Active CN113487679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110723762.4A CN113487679B (en) 2021-06-29 2021-06-29 Visual ranging signal processing method for automatic focusing system of laser marking machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110723762.4A CN113487679B (en) 2021-06-29 2021-06-29 Visual ranging signal processing method for automatic focusing system of laser marking machine

Publications (2)

Publication Number Publication Date
CN113487679A CN113487679A (en) 2021-10-08
CN113487679B (en) 2023-01-03

Family

ID=77936185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110723762.4A Active CN113487679B (en) 2021-06-29 2021-06-29 Visual ranging signal processing method for automatic focusing system of laser marking machine

Country Status (1)

Country Link
CN (1) CN113487679B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103868460A (en) * 2014-03-13 2014-06-18 桂林电子科技大学 Parallax optimization algorithm-based binocular stereo vision automatic measurement method
CN109523528A (en) * 2018-11-12 2019-03-26 西安交通大学 A kind of transmission line of electricity extracting method based on unmanned plane binocular vision SGC algorithm
CN110874572A (en) * 2019-10-29 2020-03-10 北京海益同展信息科技有限公司 Information detection method and device and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7724954B2 (en) * 2005-11-14 2010-05-25 Siemens Medical Solutions Usa, Inc. Method and system for interactive image segmentation
US7925087B2 (en) * 2006-11-14 2011-04-12 Siemens Aktiengesellschaft Method and system for image segmentation by evolving radial basis functions
KR20130120730A (en) * 2012-04-26 2013-11-05 한국전자통신연구원 Method for processing disparity space image
CN107301664A (en) * 2017-05-25 2017-10-27 天津大学 Improvement sectional perspective matching process based on similarity measure function
CN109813251B (en) * 2017-11-21 2021-10-01 蒋晶 Method, device and system for three-dimensional measurement
CN108714741A (en) * 2018-04-11 2018-10-30 哈尔滨工程大学 A kind of automatic focusing portable laser marking machine
CN108629812A (en) * 2018-04-11 2018-10-09 深圳市逗映科技有限公司 A kind of distance measuring method based on binocular camera
CN110052704B (en) * 2019-05-21 2021-04-20 哈尔滨工程大学 Laser marking machine workbench capable of automatically positioning and focusing marked workpiece
CN111709985B (en) * 2020-06-10 2023-07-07 大连海事大学 Underwater target ranging method based on binocular vision
CN112862834B (en) * 2021-01-14 2024-05-03 江南大学 Image segmentation method based on visual salient region and active contour
CN114187246A (en) * 2021-11-29 2022-03-15 哈尔滨工程大学 Focal length measuring method of laser marking machine


Also Published As

Publication number Publication date
CN113487679A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
CN111783820B (en) Image labeling method and device
JP4785880B2 (en) System and method for 3D object recognition
JP4677536B1 (en) 3D object recognition apparatus and 3D object recognition method
CN112785643A (en) Indoor wall corner two-dimensional semantic map construction method based on robot platform
CN103678754B (en) Information processor and information processing method
CN111476841B (en) Point cloud and image-based identification and positioning method and system
CN111523547B (en) 3D semantic segmentation method and terminal
CN111623942B (en) Displacement measurement method for test structure model of unidirectional vibration table
CN114022542A (en) Three-dimensional reconstruction-based 3D database manufacturing method
Intwala et al. A review on process of 3d model reconstruction
CN113393503A (en) Classification-driven shape prior deformation category-level object 6D pose estimation method
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN112712566B (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
CN113103226A (en) Visual guide robot system for ceramic biscuit processing and manufacturing
CN117218192A (en) Weak texture object pose estimation method based on deep learning and synthetic data
CN113487679B (en) Visual ranging signal processing method for automatic focusing system of laser marking machine
CN112365600B (en) Three-dimensional object detection method
CN115861547A (en) Model surface sample line generation method based on projection
CN113129348B (en) Monocular vision-based three-dimensional reconstruction method for vehicle target in road scene
CN115601430A (en) Texture-free high-reflection object pose estimation method and system based on key point mapping
CN109377562B (en) Viewpoint planning method for automatic three-dimensional measurement
CN111612071B (en) Deep learning method for generating depth map from curved surface part shadow map
JP2002350131A (en) Calibration method for and apparatus of multiocular camera and computer program
CN114719759B (en) Object surface perimeter and area measurement method based on SLAM algorithm and image instance segmentation technology
CN115420277B (en) Object pose measurement method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant