CN113487679A - Visual ranging signal processing method for automatic focusing system of laser marking machine - Google Patents

Visual ranging signal processing method for automatic focusing system of laser marking machine

Info

Publication number
CN113487679A
CN113487679A (application CN202110723762.4A)
Authority
CN
China
Prior art keywords
level set
curve
image
evolution
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110723762.4A
Other languages
Chinese (zh)
Other versions
CN113487679B (en)
Inventor
孙晶华
吴婧雯
张晓峻
王佳欢
张书明
朱怀武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN202110723762.4A priority Critical patent/CN113487679B/en
Publication of CN113487679A publication Critical patent/CN113487679A/en
Application granted granted Critical
Publication of CN113487679B publication Critical patent/CN113487679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 - Measuring distances in line of sight; Optical rangefinders
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30244 - Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a visual ranging signal processing method for the automatic focusing system of a laser marking machine, comprising the following steps. S1: calibrating the camera with a camera calibration algorithm to obtain the internal and external parameters of the camera, and acquiring a binocular image of the marked object; S2: extracting the edge contour of the marked object with an image segmentation algorithm; S3: performing stereo matching on the binocular images to obtain the image parallax; S4: calculating the depth of the corner points from the acquired image parallax by the similar-triangle principle, and completing the distance measurement of the marked object; S5: using GUI controls to complete the interaction between the user and the program. The method can handle blurred edges, uneven gray levels and similar image degradations caused by noise, field-offset effects and other factors; it is general-purpose, adapts to most kinds of marked objects, and avoids the problem that existing automatic focusing algorithms cannot cope with marked objects of differing size, position and surface properties.

Description

Visual ranging signal processing method for automatic focusing system of laser marking machine
Technical Field
The invention belongs to the field of laser marking, and particularly relates to a visual ranging signal processing method of an automatic focusing system of a laser marking machine.
Background
A laser marking machine generates a high-energy laser beam that irradiates the surface of a workpiece; the resulting heat etches the workpiece so that the required characters and graphics appear on its surface. Marking quality depends on how accurately the marking machine is focused on the workpiece, yet current focusing methods for laser marking machines all have shortcomings that affect marking precision and efficiency. To achieve precise focusing, various focusing methods have been proposed. Chinese patent CN103350281A acquires a series of images by adjusting the distance between the object and the image acquisition device, obtains edge information for each image in the sequence, and divides each image into identical regions; the in-focus position of each region is then found to obtain accurate position information. Its precision depends on the size of the regions into which each image in the sequence is divided: if the regions are large, marking precision drops, and the extraction and fusion rules further reduce the precision of the height information. Chinese patent CN208991976U drives the laser cavity up and down with a stepping motor and a transmission lead screw, records the original position when the sensing striker contacts the workpiece, and then moves the laser cavity by the set focal distance through a control device. Because the position of the marked object is recorded by sensing the movement of the striker with hardware, the method places high demands on the hardware and lacks generality. Chinese patent CN110052704A obtains the depth information of the marked object through binocular stereo vision: the acquired images are converted to grayscale, features are extracted, and matching yields the depth information. This method must extract the contour of the marked object from the image gray-level information, but in practice most marked objects exhibit blurred edges, uneven gray levels and similar phenomena caused by noise, field-offset effects and other factors, so the method is still deficient in practical applications.
Laser marking machines are widely used in daily life and production, and the types and sizes of marked articles grow day by day; precise marking therefore shifts the demands on the laser marking machine toward focus accuracy and the diversity of marked materials. In practice, marked objects differ in size, position and surface properties, so existing focusing methods cannot meet the needs of production and daily life.
Disclosure of Invention
The invention aims to solve the problems of existing laser marking machines: focusing is difficult and inaccurate, the machines are suited to only a single type of marked material, and focusing precision drops when boundary gray levels overlap.
The purpose of the invention is realized as follows:
A visual ranging signal processing method for an automatic focusing system of a laser marking machine comprises the following steps:
S1: calibrating the camera with a camera calibration algorithm to obtain the internal and external parameters of the camera, and acquiring a binocular image of the marked object;
S2: extracting the edge contour of the marked object with an image segmentation algorithm;
S3: performing stereo matching on the binocular images to obtain the image parallax;
S4: calculating the depth of the corner points from the acquired image parallax by the similar-triangle principle, and completing the distance measurement of the marked object;
S5: using GUI controls to complete the interaction between the user and the program.
The level set image segmentation method based on the active contour model constructs a gradient descent flow by using a variational principle, solves an energy functional minimum value point, and iteratively determines a convergence rule, wherein the step S2 specifically comprises the following steps:
s2-1: a level set equation is used for representing a specific evolution mode of the curve;
level set function Φ:
Φ(x) = ±dist(x, C)
where dist(x, C) is the distance from the pixel point x to the evolution curve C, and the sign is determined by the position of the pixel relative to the curve: Φ(x) is positive when x lies outside the closed curve and negative when x lies inside it.
Curve evolution is represented with the higher-dimensional level set function as follows. Assume an implicitly represented closed curve C that is the zero level set C = {(x, y) : Φ(x, y, t) = 0} of the time-varying higher-dimensional level set function Φ(x, y, t). When Φ is regular, i.e.
∇Φ ≠ 0,
the curve evolution expressed by the level set basic equation can be obtained:
∂Φ/∂t + F·|∇Φ| = 0,
where F denotes the evolution speed of the curve along its normal direction. After the curve evolution equation is embedded into the higher-dimensional implicit function, the resulting equation describes the specific evolution of the curve. With the initial curve C₀ known, the evolution of the closed curve can be represented by the level set basic equation: given the corresponding known initial condition Φ₀, the level set function Φ is solved, and at any time t the curve is recovered as the zero level set where Φ(x, y, t) = 0.
S2-2: and solving the minimum value of the energy functional by using the gradient descending flow, and finally finishing the convergence of the evolution curve by using an iteration result.
In the one-dimensional case, a function u(x) satisfying the fixed endpoint conditions u(x₀) = a, u(x₁) = b gives rise to the functional form:
E(u) = ∫_{x₀}^{x₁} F(x, u, u′) dx
After mathematical transformation, the gradient descent flow form is obtained:
∂u/∂t = −( ∂F/∂u − d/dx (∂F/∂u′) )
the step S3 specifically includes:
S3-1: a point-like feature descriptor is used as the matching primitive;
s3-2: the SSD measurement function is adopted as a similarity measurement function to complete corresponding point matching;
s3-3: and performing global matching by using a dynamic programming method to obtain a final matching result of the matching point set, namely obtaining the matched image parallax.
Compared with the prior art, the invention has the beneficial effects that:
The method uses a binocular camera to acquire images of the object to be marked beneath the head of the laser marking machine, and segments the images with a level set method based on an active contour model to accurately extract the contour; the intersection points obtained by the level set method are then used together with global matching on the cost energy function, and temporary data storage improves both precision and processing speed, realizing depth information extraction for the marked object. The advantages are as follows: the method can handle blurred edges, uneven gray levels and similar image degradations caused by noise, field-offset effects and other factors; the automatic focusing method for the laser marking machine is general-purpose, adapts to most kinds of marked objects, and avoids the problem that existing automatic focusing algorithms cannot cope with marked objects of differing size, position and surface properties. It also handles unusual marked objects that existing algorithms cannot process, for example materials whose local gray-level properties change because ambient humidity causes a permeation effect; marked objects of larger thickness, where gray-level stacking caused by a complex edge structure lowers the spatial resolution of the image and makes the object boundary difficult to delimit; and complex factory environments in which the mounting frame of the laser marking machine vibrates or shifts slightly, so that periodic or aperiodic motion of the camera and the marking machine produces motion artifacts and similar conditions in the images.
Drawings
FIG. 1 is a flow chart of the image segmentation algorithm of the present invention.
FIG. 2 is a diagram of the effect of the GUI according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
The invention discloses a visual ranging signal processing method for an automatic focusing system of a laser marking machine, which comprises the following steps:
S1: calibrating the camera with a camera calibration algorithm to obtain the internal and external parameters of the camera, and acquiring a binocular image of the marked object;
S2: extracting the edge contour of the marked object with an image segmentation algorithm;
S3: performing stereo matching on the binocular images to obtain the image parallax;
S4: calculating the depth of the corner points from the acquired image parallax by the similar-triangle principle, and completing the distance measurement of the marked object;
S5: using GUI controls to complete the interaction between the user and the program.
Further: the method for extracting the edge contour of the marked object by adopting the image segmentation algorithm comprises the following steps:
s2-1: a level set equation is used for representing a specific evolution mode of the curve;
s2-2: and solving the minimum value of the energy functional by using the gradient descending flow, and finally finishing the convergence of the evolution curve by using an iteration result.
Further: the method for obtaining the image parallax by using the binocular image to perform stereo matching comprises the following steps:
S3-1: a point-like feature descriptor is used as the matching primitive;
s3-2: the SSD measurement function is adopted as a similarity measurement function to complete corresponding point matching;
s3-3: and performing global matching by using a dynamic programming method to obtain a final matching result of the matching point set, namely obtaining the matched image parallax.
S1: calibrating the camera by adopting a camera calibration algorithm to obtain internal and external parameters of the camera and obtain a binocular image of the marked object;
The internal geometric parameters and optical characteristics of the camera are determined, together with the rotation matrix and translation matrix involved in converting the camera coordinate system into the world coordinate system. By calibrating the binocular camera, the internal, external and distortion parameters of the cameras are determined, a geometric model of camera imaging is constructed, and the geometric relation between a pixel in the image and the corresponding surface point of the spatial object in three-dimensional space is established.
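For illustration only, a calibration step of this kind could be sketched with OpenCV as below; the checkerboard pattern size, square size, function names and image lists are assumptions made for the sketch, not values or code taken from the patent.

```python
import cv2
import numpy as np

# Assumed checkerboard geometry; the patent does not specify the calibration target.
PATTERN = (9, 6)       # inner corners per row and column
SQUARE_MM = 25.0       # edge length of one square, in millimetres

def calibrate_stereo(left_images, right_images, image_size):
    """Estimate the internal parameters, distortion coefficients and the stereo
    extrinsics (rotation R and translation T between the two cameras) of a rig."""
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

    obj_pts, left_pts, right_pts = [], [], []
    for img_l, img_r in zip(left_images, right_images):
        ok_l, corners_l = cv2.findChessboardCorners(img_l, PATTERN)
        ok_r, corners_r = cv2.findChessboardCorners(img_r, PATTERN)
        if ok_l and ok_r:
            obj_pts.append(objp)
            left_pts.append(corners_l)
            right_pts.append(corners_r)

    # Per-camera intrinsics and distortion, then the joint stereo calibration.
    _, K_l, d_l, _, _ = cv2.calibrateCamera(obj_pts, left_pts, image_size, None, None)
    _, K_r, d_r, _, _ = cv2.calibrateCamera(obj_pts, right_pts, image_size, None, None)
    _, K_l, d_l, K_r, d_r, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K_l, d_l, K_r, d_r, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_l, d_l, K_r, d_r, R, T
```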
S2: extracting the edge contour of the marked object by adopting an image segmentation algorithm;
the image segmentation algorithm flow chart is shown in fig. 1. The level set segmentation method based on the active contour model comprises the following 5 steps: (1) an initialization curve 1 is given. (2) Additional constraint 2 is determined. (3) The convergence rule 3 is iteratively determined. (4) And obtaining an evolution curve 4 of convergence at the edge of the image. (5) The target object segmentation 5 is achieved according to the curve.
Level set function Φ:
Φ(x) = ±dist(x, C)
where dist(x, C) is the distance from the pixel point x to the evolution curve C, and the sign is determined by the position of the pixel relative to the curve: Φ(x) is positive when x lies outside the closed curve and negative when x lies inside it.
Curve evolution is represented with the higher-dimensional level set function as follows. Assume an implicitly represented closed curve C that is the zero level set C = {(x, y) : Φ(x, y, t) = 0} of the time-varying higher-dimensional level set function Φ(x, y, t). When Φ is regular, i.e.
∇Φ ≠ 0,
the curve evolution expressed by the level set basic equation can be obtained:
∂Φ/∂t + F·|∇Φ| = 0,
where F denotes the evolution speed of the curve along its normal direction. After the curve evolution equation is embedded into the higher-dimensional implicit function, the resulting equation describes the specific evolution of the curve. With the initial curve C₀ known, the evolution of the closed curve can be represented by the level set basic equation: given the corresponding known initial condition Φ₀, the level set function Φ is solved, and at any time t the curve is recovered as the zero level set where Φ(x, y, t) = 0.
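As a purely illustrative numerical sketch (not the patent's active-contour energy), this evolution can be simulated by initializing Φ as a signed distance function and applying an explicit update of ∂Φ/∂t = −F·|∇Φ|; a practical implementation would add upwind differencing and periodic reinitialization, and would derive the speed F from the image (e.g. an edge-stopping term).

```python
import numpy as np

def initial_phi(shape, center, radius):
    """Signed distance to a circle: positive outside, negative inside,
    matching the sign convention of the level set function above."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.sqrt((xx - center[0]) ** 2 + (yy - center[1]) ** 2) - radius

def evolve(phi, speed, dt=0.5, steps=200):
    """Explicit iteration of dPhi/dt = -F * |grad Phi|.  `speed` may be a
    scalar or an array of the same shape as `phi`, for example an
    edge-stopping function computed from the image gradient."""
    for _ in range(steps):
        gy, gx = np.gradient(phi)
        grad_norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        phi = phi - dt * speed * grad_norm
    return phi

# The evolved contour is the zero level set of phi; its interior is phi < 0.
```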
Image segmentation is achieved by minimizing the energy functional of the curve. In the one-dimensional case, a function u(x) satisfying the fixed endpoint conditions u(x₀) = a, u(x₁) = b gives rise to the functional form:
E(u) = ∫_{x₀}^{x₁} F(x, u, u′) dx
By Fermat's theorem of calculus, a point at which the first variation vanishes can be taken as the minimum of the functional E(u). Let the optimal solution u(x) be perturbed to u(x) + v(x); the corresponding energy functional is E(u + v). At the extremum of E(u), the perturbation v(x) is small enough not to affect the value of the energy functional, and since u(x₀) + v(x₀) = a, u(x₁) + v(x₁) = b while u(x₀) = a, u(x₁) = b, the boundary conditions on the perturbation follow:
v(x₀) = 0,  v(x₁) = 0
Because the perturbation v(x) and its derivative are sufficiently small, the integrand F can be expanded with a Taylor series:
F(x, u + v, u′ + v′) ≈ F(x, u, u′) + v·∂F/∂u + v′·∂F/∂u′
Collecting terms, the first variation must vanish at the extremum:
δE = ∫_{x₀}^{x₁} ( v·∂F/∂u + v′·∂F/∂u′ ) dx = 0
Integration by parts, using v(x₀) = v(x₁) = 0, yields the Euler equation of the variational problem:
∂F/∂u − d/dx (∂F/∂u′) = 0
The minimum of the energy functional can now be found by solving the Euler equation of the variational problem. The Euler equation is a typical nonlinear partial differential equation; a time variable is introduced and the equation is converted into a dynamic partial differential equation by the gradient descent flow method, giving the gradient descent flow form corresponding to the variational equation:
∂u/∂t = −( ∂F/∂u − d/dx (∂F/∂u′) )
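As a sanity check that is not taken from the patent, applying this machinery to the simple integrand F(x, u, u′) = ½(u′)² gives the familiar result:

```latex
E(u) = \int_{x_0}^{x_1} \tfrac{1}{2}\,u'(x)^2 \, dx
\;\Longrightarrow\;
\frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} = -u''(x) = 0
\quad\text{(Euler equation)},
\qquad
\frac{\partial u}{\partial t}
 = -\Bigl(\frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'}\Bigr)
 = u''(x).
```

That is, the gradient descent flow of this toy energy is the heat equation, which smooths u toward the straight line joining the two fixed endpoints; the segmentation case replaces this integrand with the active-contour energy.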
s3: stereo matching is carried out by utilizing binocular images to obtain image parallax;
After the matching primitives are selected, a horizontal epipolar-line matching search is performed along the rectified binocular stereo image pair to obtain the correspondence of each pixel pair in the matched images, and a disparity map is computed for evaluation. The stereo matching algorithm uses basic assumptions and specific constraint conditions to remove the ambiguity of stereo matching, converting the stereo matching problem into the problem of solving for the optimal solution of an energy function.
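A rectification step of this kind could be sketched with OpenCV as follows, reusing the calibration outputs from the earlier sketch; the function and variable names are assumptions for illustration, not code from the patent.

```python
import cv2

def rectify_pair(K_l, d_l, K_r, d_r, R, T, image_size, img_l, img_r):
    """Rectify a binocular pair so that corresponding points lie on the same
    horizontal epipolar line, enabling the row-wise matching search described
    above (sketch only; error handling omitted)."""
    R_l, R_r, P_l, P_r, Q, _, _ = cv2.stereoRectify(
        K_l, d_l, K_r, d_r, image_size, R, T)
    map_lx, map_ly = cv2.initUndistortRectifyMap(
        K_l, d_l, R_l, P_l, image_size, cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(
        K_r, d_r, R_r, P_r, image_size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)
    return rect_l, rect_r, Q   # Q can reproject disparities to 3-D later
```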
The global cost energy function is:
γ(M) = k_occ·N_occ − k_r·N_m + Σ_{i=1}^{N_m} DSI(x_i, y_i)
The dynamic programming algorithm performs a global energy optimization on each image row, finding in each row the matching sequence M that minimizes γ(M), where γ(M) denotes the matching cost of the sequence M. The number of occluded points in M is N_occ, comprising left occlusions and right occlusions, i.e. the parts of the left and right images of the binocular pair that are occluded; they are weighted by the occlusion factor k_occ, which accounts for the additional disparity information required at image boundaries made unclear by occlusion. The number of corresponding points successfully matched in M is N_m, and the maximum difference allowed between matched corresponding points is governed by the matching factor k_r. The dissimilarity of a single matched point is denoted DSI(x_i, y_i), and its sum over M represents the global dissimilarity used as the criterion for determining corresponding points.
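A schematic dynamic-programming pass over one scanline might look like the sketch below; the constant penalty k_occ stands in for the occlusion term above, the matching factor k_r is omitted for brevity, and dsi[i, j] is assumed to be a precomputed dissimilarity such as the SSD cost defined further down.

```python
import numpy as np

def dp_scanline(dsi, k_occ):
    """Dynamic programming over one scanline.  dsi[i, j] is the dissimilarity
    of matching left pixel i to right pixel j; k_occ is the penalty for
    declaring a pixel occluded.  Returns the matched pairs of the
    minimum-cost sequence."""
    n, m = dsi.shape
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, :] = k_occ * np.arange(m + 1)      # leading right occlusions
    cost[:, 0] = k_occ * np.arange(n + 1)      # leading left occlusions
    back = np.zeros((n + 1, m + 1), dtype=np.uint8)

    for i in range(1, n + 1):
        for j in range(1, m + 1):
            choices = (cost[i - 1, j - 1] + dsi[i - 1, j - 1],  # match
                       cost[i - 1, j] + k_occ,                  # left pixel occluded
                       cost[i, j - 1] + k_occ)                  # right pixel occluded
            back[i, j] = int(np.argmin(choices))
            cost[i, j] = choices[back[i, j]]

    # Backtrack to recover the matching sequence M.
    matches, i, j = [], n, m
    while i > 0 and j > 0:
        if back[i, j] == 0:
            matches.append((i - 1, j - 1))  # left pixel i-1 matched to right pixel j-1
            i, j = i - 1, j - 1
        elif back[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return matches[::-1]
```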
The similarity of corresponding points in the two images of the binocular pair is measured with a similarity measure function, and window accumulation is adopted to improve robustness to noise during the similarity measurement. The similarity measure function is computed with the SSD operator.
Sum of squared differences (SSD):
C(u, v, d) = Σ_{(i,j)∈W} [ I_l(u+i, v+j) − I_r(u+i+d, v+j) ]²    (10)
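Equation (10) can be transcribed almost directly; the sketch below assumes rectified grayscale images stored as NumPy arrays indexed [row, column], ignores border handling, and searches disparities 0..d_max for a single pixel.

```python
import numpy as np

def ssd_cost(left, right, u, v, d, w=3):
    """SSD over a (2w+1)x(2w+1) window W centred at column u, row v of the
    left image, compared against the window shifted by disparity d in the
    right image (bounds checks omitted for brevity)."""
    win_l = left[v - w:v + w + 1, u - w:u + w + 1].astype(np.float64)
    win_r = right[v - w:v + w + 1, u + d - w:u + d + w + 1].astype(np.float64)
    return float(np.sum((win_l - win_r) ** 2))

def best_disparity(left, right, u, v, d_max, w=3):
    """Pick, for one pixel, the disparity with the smallest SSD cost."""
    costs = [ssd_cost(left, right, u, v, d, w) for d in range(d_max + 1)]
    return int(np.argmin(costs))
```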
s4: calculating to obtain the depth of an angular point by utilizing the acquired image parallax through a similar triangle principle, and completing the distance test of the marked object;
the method comprises the steps of observing the same scene by using two or more different viewpoints to obtain pictures under different viewing angles, and measuring and calculating the position offset between picture pixels through a visual ranging model, namely the parallax existing between two pictures, so as to obtain the three-dimensional information of an object.
The projection difference produced when the left and right cameras observe the same scene is called the parallax; according to the triangulation principle:
d = x_r − x_l
Z = f·b / d
Here b is the baseline length of the two cameras, the depth of field Z is the distance from the object point to the baseline in the three-dimensional world, and f is the focal length of the cameras; the corresponding positions of the point in the right and left digital images are x_r and x_l.
Under the binocular vision model, the imaging planes of the left and right cameras are parallel to the baseline, and the parallax is the difference of the horizontal coordinates of the corresponding image points in the two projections: d = x_r − x_l. The depth of field is proportional to the camera focal length and the baseline length, and inversely proportional to the parallax.
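With these relations, converting a disparity value (or map) to depth is a one-line computation; the focal length, baseline and disparity used in the worked example are illustrative values, not parameters of the patent's setup.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Depth of field Z = f*b/d from the similar-triangle relation above.
    `disparity_px` is the disparity magnitude in pixels; zero or negative
    values are treated as invalid (no match)."""
    d = np.asarray(disparity_px, dtype=np.float64)
    depth_mm = np.full_like(d, np.nan)
    valid = d > 0
    depth_mm[valid] = focal_px * baseline_mm / d[valid]
    return depth_mm

# Worked example with assumed (not patent-specified) parameters:
# f = 1200 px, b = 60 mm, d = 24 px  ->  Z = 1200 * 60 / 24 = 3000 mm
```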
S5: utilizing the GUI control to complete the interaction between the user and the program;
The graphical user interface is shown in FIG. 2. Controls are added to the window with the layout editor, and the GUI controls complete the interaction between the user and the program. The user moves the on-screen pointer or cursor with a mouse or other input device and notifies the application by clicking the left mouse button, thereby selecting objects or performing other operations. The M-file generated for the GUI in the GUIDE environment controls the internal GUI components, so that the GUI modules respond appropriately to the user's operations; the specific responses are implemented by Callback functions.
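The patent's interface is built with MATLAB GUIDE and Callback functions; purely as an analogue of that callback pattern (not the actual implementation), a minimal Python/tkinter sketch with placeholder handlers could look like this:

```python
import tkinter as tk

def on_place_object():
    # Placeholder: in the described workflow this would start the pre-marking
    # function and capture a snapshot from the binocular cameras.
    status.set("Object placed: binocular snapshot captured")

def on_show_height():
    # Placeholder: this would run segmentation, stereo matching and depth
    # extraction, then display the computed head height.
    height_var.set("Height: -- mm")

root = tk.Tk()
root.title("Auto-focus demo (sketch)")
status = tk.StringVar(value="Ready")
height_var = tk.StringVar(value="Height: ?")

tk.Button(root, text="Place object", command=on_place_object).pack(fill="x")
tk.Button(root, text="Show object height", command=on_show_height).pack(fill="x")
tk.Label(root, textvariable=height_var).pack()
tk.Label(root, textvariable=status).pack()
root.mainloop()
```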
The marked object is placed under the laser marking machine and adjusted until it forms a complete, clear image in both the left and right cameras of the binocular stereo vision device. Clicking "Place the object" starts the pre-marking function of the laser marking machine and pops up a video screenshot from the binocular cameras. Clicking "Display object height" runs the pre-designed algorithms in sequence: the level set algorithm based on the active contour model is completed, then the binocular stereo matching algorithm, and finally the depth information of the marked object is extracted; the computed height of the laser marking head is displayed in the blank field preset beside the "Display height when marking" button. After the depth information extraction module has finished, clicking "Start marking" closes the focusing algorithm, the height of the laser marking head is adjusted so that the emitted laser is focused exactly on the upper surface of the marked object, and the accompanying program of the laser marking machine runs the marking pattern task set in advance in the marking program.
The binocular stereo vision experimental platform consists of a laser marking machine, a true-color binocular camera and a computer. The software performs the computation in simulation and comprises the following four modules: a binocular camera calibration module, an image segmentation module, a stereo matching and depth information extraction module, and a graphical user interface (GUI) module. The binocular camera calibration module determines the camera parameters, the image segmentation module extracts the boundary of the marked object, the stereo matching and depth information extraction module extracts the depth information of the image, and the GUI module realizes human-computer interaction.
The binocular camera is fixed above the laser head, with the midpoint of its baseline on the axis of the laser head; the camera is mounted level, in a plane parallel to the worktable, i.e. parallel to the horizontal direction of the laser head.
A binocular vision ranging model is constructed based on a computer vision theory, and relevant parameters of the model are determined by adopting a Zhang-Zhengyou calibration algorithm.
The level set image segmentation method based on the active contour model constructs a gradient descent flow by using a variational principle, solves an energy functional minimum value point, and iteratively determines a convergence rule.
S2-1: a level set equation is used for representing the specific evolution mode of the curve;
level set function Φ:
Φ(x) = ±dist(x, C)
where dist(x, C) is the distance from the pixel point x to the evolution curve C, and the sign is determined by the position of the pixel relative to the curve: Φ(x) is positive when x lies outside the closed curve and negative when x lies inside it.
Curve evolution is represented with the higher-dimensional level set function as follows. Assume an implicitly represented closed curve C that is the zero level set C = {(x, y) : Φ(x, y, t) = 0} of the time-varying higher-dimensional level set function Φ(x, y, t). When Φ is regular, i.e.
∇Φ ≠ 0,
the curve evolution expressed by the level set basic equation can be obtained:
∂Φ/∂t + F·|∇Φ| = 0,
where F denotes the evolution speed of the curve along its normal direction. After the curve evolution equation is embedded into the higher-dimensional implicit function, the resulting equation describes the specific evolution of the curve. With the initial curve C₀ known, the evolution of the closed curve can be represented by the level set basic equation: given the corresponding known initial condition Φ₀, the level set function Φ is solved, and at any time t the curve is recovered as the zero level set where Φ(x, y, t) = 0.
S2-2: the minimum of the energy functional is solved using the gradient descent flow, and the convergence of the evolution curve is finally completed through iteration.
In the one-dimensional case, a function u(x) satisfying the fixed endpoint conditions u(x₀) = a, u(x₁) = b gives rise to the functional form:
E(u) = ∫_{x₀}^{x₁} F(x, u, u′) dx
After mathematical transformation, the gradient descent flow form is obtained:
∂u/∂t = −( ∂F/∂u − d/dx (∂F/∂u′) )
and performing global matching by adopting a dynamic programming method, and calculating the similarity measure function by adopting an SSD operator.
And designing a GUI (graphical user interface) based on a user graphic development interface to realize human-computer interaction.
The invention discloses a visual ranging signal processing method, based on binocular stereo vision, for the automatic focusing system of a laser marking machine, which can complete the automatic focusing work for an unspecified marking workpiece. The method comprises: (1) acquiring left and right camera images of the target object with a binocular camera; (2) calibrating the binocular camera with a calibration toolbox to obtain the internal and external parameters of the cameras; (3) segmenting the acquired images with a level set method based on an active contour model and extracting the boundary of the marked object; (4) extracting the depth information using the binocular stereo matching principle; (5) building a graphical user interface to realize human-computer interaction. When acquiring the boundary information of the target object, the method minimizes the energy function of the evolution curve model instead of relying on gray-level changes, so it can handle blurred edges, uneven gray levels and similar image degradations caused by noise, field-offset effects and other factors, reducing the difficulty of image recognition and improving the accuracy of the whole algorithm. The automatic focusing method for the laser marking machine is general-purpose: it adapts to most kinds of marked objects and solves the problem that existing automatic focusing algorithms cannot cope with marked objects of differing size, position and surface properties.

Claims (3)

1. A visual ranging signal processing method for an automatic focusing system of a laser marking machine, characterized by comprising the following steps:
S1: calibrating the camera with a camera calibration algorithm to obtain the internal and external parameters of the camera, and acquiring a binocular image of the marked object;
S2: extracting the edge contour of the marked object with an image segmentation algorithm;
S3: performing stereo matching on the binocular images to obtain the image parallax;
S4: calculating the depth of the corner points from the acquired image parallax by the similar-triangle principle, and completing the distance measurement of the marked object;
S5: using GUI controls to complete the interaction between the user and the program.
2. The visual ranging signal processing method of the automatic focusing system of the laser marking machine according to claim 1, characterized in that a level set image segmentation method based on an active contour model constructs a gradient descent flow by using a variational principle, solves an energy functional minimum point, and iteratively determines a convergence rule, wherein the step S2 specifically comprises:
s2-1: a level set equation is used for representing a specific evolution mode of the curve;
level set function Φ:
Φ(x) = ±dist(x, C)
where dist(x, C) is the distance from the pixel point x to the evolution curve C, and the sign is determined by the position of the pixel relative to the curve: Φ(x) is positive when x lies outside the closed curve and negative when x lies inside it.
Curve evolution is represented with the higher-dimensional level set function as follows: assuming an implicitly represented closed curve C that is the zero level set C = {(x, y) : Φ(x, y, t) = 0} of the time-varying higher-dimensional level set function Φ(x, y, t), when Φ is regular, i.e. ∇Φ ≠ 0, the curve evolution represented by the level set basic equation can be obtained:
∂Φ/∂t + F·|∇Φ| = 0,
where F denotes the evolution speed of the curve along its normal direction. After the curve evolution equation is embedded into the higher-dimensional implicit function, the resulting equation describes the specific evolution of the curve. With the initial curve C₀ known, the evolution of the closed curve can be represented by the level set basic equation: given the corresponding known initial condition Φ₀, the level set function Φ is solved, and at any time t the curve is recovered as the zero level set where Φ(x, y, t) = 0.
S2-2: and solving the minimum value of the energy functional by using the gradient descending flow, and finally finishing the convergence of the evolution curve by using an iteration result.
In the one-dimensional case, a function u(x) satisfying the fixed endpoint conditions u(x₀) = a, u(x₁) = b gives rise to the functional form:
E(u) = ∫_{x₀}^{x₁} F(x, u, u′) dx
After mathematical transformation, the gradient descent flow form is obtained:
∂u/∂t = −( ∂F/∂u − d/dx (∂F/∂u′) )
3. the method for processing the visual ranging signal of the automatic focusing system of the laser marking machine according to claim 1, wherein the step S3 specifically comprises:
S3-1: a point-like feature descriptor is used as the matching primitive;
s3-2: the SSD measurement function is adopted as a similarity measurement function to complete corresponding point matching;
s3-3: and performing global matching by using a dynamic programming method to obtain a final matching result of the matching point set, namely obtaining the matched image parallax.
CN202110723762.4A 2021-06-29 2021-06-29 Visual ranging signal processing method for automatic focusing system of laser marking machine Active CN113487679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110723762.4A CN113487679B (en) 2021-06-29 2021-06-29 Visual ranging signal processing method for automatic focusing system of laser marking machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110723762.4A CN113487679B (en) 2021-06-29 2021-06-29 Visual ranging signal processing method for automatic focusing system of laser marking machine

Publications (2)

Publication Number Publication Date
CN113487679A true CN113487679A (en) 2021-10-08
CN113487679B CN113487679B (en) 2023-01-03

Family

ID=77936185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110723762.4A Active CN113487679B (en) 2021-06-29 2021-06-29 Visual ranging signal processing method for automatic focusing system of laser marking machine

Country Status (1)

Country Link
CN (1) CN113487679B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070211940A1 (en) * 2005-11-14 2007-09-13 Oliver Fluck Method and system for interactive image segmentation
US20080112617A1 (en) * 2006-11-14 2008-05-15 Siemens Corporate Research, Inc. Method and System for Image Segmentation by Evolving Radial Basis functions
US20130287291A1 (en) * 2012-04-26 2013-10-31 Electronics And Telecommunications Research Institute Method of processing disparity space image
CN103868460A (en) * 2014-03-13 2014-06-18 桂林电子科技大学 Parallax optimization algorithm-based binocular stereo vision automatic measurement method
CN107301664A (en) * 2017-05-25 2017-10-27 天津大学 Improvement sectional perspective matching process based on similarity measure function
WO2019100933A1 (en) * 2017-11-21 2019-05-31 蒋晶 Method, device and system for three-dimensional measurement
CN108714741A (en) * 2018-04-11 2018-10-30 哈尔滨工程大学 A kind of automatic focusing portable laser marking machine
CN108629812A (en) * 2018-04-11 2018-10-09 深圳市逗映科技有限公司 A kind of distance measuring method based on binocular camera
CN109523528A (en) * 2018-11-12 2019-03-26 西安交通大学 A kind of transmission line of electricity extracting method based on unmanned plane binocular vision SGC algorithm
CN110052704A (en) * 2019-05-21 2019-07-26 哈尔滨工程大学 A kind of worktable of laser marking machine of pair of mark workpiece automatic positioning focusing
CN110874572A (en) * 2019-10-29 2020-03-10 北京海益同展信息科技有限公司 Information detection method and device and storage medium
CN111709985A (en) * 2020-06-10 2020-09-25 大连海事大学 Underwater target ranging method based on binocular vision
CN112862834A (en) * 2021-01-14 2021-05-28 江南大学 Image segmentation method based on visual salient region and active contour
CN114187246A (en) * 2021-11-29 2022-03-15 哈尔滨工程大学 Focal length measuring method of laser marking machine

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
SUNLINJU: "主动轮廓模型" [Active Contour Models], 《HTTPS://BLOG.CSDN.NET/SUNLINJU/ARTICLE/DETAILS/52999872》 *
Y. EBRAHIMDOOST et al.: "Medical Image Segmentation Using Active Contours and a Level Set Model: Application to Pulmonary Embolism (PE) Segmentation", 《2010 Fourth International Conference on Digital Society》 *
夜雨飘零1: "双目摄像头测量距离" [Measuring Distance with a Binocular Camera], 《HTTPS://BLOG.CSDN.NET/QQ_33200967/ARTICLE/DETAILS/106019634》 *
孙怡 et al.: "双目视差测距中的图像配准技术研究" [Research on Image Registration Technology in Binocular Parallax Ranging], 《物联网技术》 [Internet of Things Technologies] *
孙怡 et al.: "双目视差测距中的立体匹配技术研究" [Research on Stereo Matching Technology in Binocular Parallax Ranging], 《中国优秀硕士学位论文全文数据库 信息科技辑》 [China Master's Theses Full-text Database, Information Science and Technology] *
孙晶华 et al.: "提高水下激光成像衬度的方法研究" [Research on Methods for Improving the Contrast of Underwater Laser Imaging], 《中国博士学位论文全文数据库 信息科技辑》 [China Doctoral Dissertations Full-text Database, Information Science and Technology] *
林维诗: "基于主动轮廓模型和水平集方法的图像分割" [Image Segmentation Based on the Active Contour Model and the Level Set Method], 《中国优秀硕士学位论文全文数据库 信息科技辑》 [China Master's Theses Full-text Database, Information Science and Technology] *
罗红根 et al.: "基于主动轮廓模型和水平集方法的图像分割技术" [Image Segmentation Technology Based on the Active Contour Model and the Level Set Method], 《中国图象图形学报》 [Journal of Image and Graphics] *

Also Published As

Publication number Publication date
CN113487679B (en) 2023-01-03

Similar Documents

Publication Publication Date Title
CN111783820B (en) Image labeling method and device
JP4785880B2 (en) System and method for 3D object recognition
CN112785643A (en) Indoor wall corner two-dimensional semantic map construction method based on robot platform
JP4677536B1 (en) 3D object recognition apparatus and 3D object recognition method
CN111476841B (en) Point cloud and image-based identification and positioning method and system
CN111523547B (en) 3D semantic segmentation method and terminal
CN111623942B (en) Displacement measurement method for test structure model of unidirectional vibration table
CN205451195U (en) Real -time three -dimensional some cloud system that rebuilds based on many cameras
CN114022542A (en) Three-dimensional reconstruction-based 3D database manufacturing method
Intwala et al. A review on process of 3d model reconstruction
CN113393503A (en) Classification-driven shape prior deformation category-level object 6D pose estimation method
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN112712566B (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
CN117218192A (en) Weak texture object pose estimation method based on deep learning and synthetic data
CN113487679B (en) Visual ranging signal processing method for automatic focusing system of laser marking machine
Bhakar et al. A review on classifications of tracking systems in augmented reality
CN112365600B (en) Three-dimensional object detection method
CN115861547A (en) Model surface sample line generation method based on projection
CN113129348B (en) Monocular vision-based three-dimensional reconstruction method for vehicle target in road scene
CN115601430A (en) Texture-free high-reflection object pose estimation method and system based on key point mapping
CN109377562B (en) Viewpoint planning method for automatic three-dimensional measurement
JP3637416B2 (en) Three-dimensional measurement method, three-dimensional measurement system, image processing apparatus, and computer program
Vo-Le et al. Automatic Method for Measuring Object Size Using 3D Camera
JP2002350131A (en) Calibration method for and apparatus of multiocular camera and computer program
CN114719759B (en) Object surface perimeter and area measurement method based on SLAM algorithm and image instance segmentation technology

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant