CN116721094B - Visual pattern recognition method and system under complex background and laser cutting machine - Google Patents


Info

Publication number
CN116721094B
CN116721094B (granted) · Application No. CN202310968638.3A
Authority
CN
China
Prior art keywords
image
digital
workpiece
matching
pattern recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310968638.3A
Other languages
Chinese (zh)
Other versions
CN116721094A (en)
Inventor
于飞
彭利
陈晓毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Bodor Laser Co Ltd
Original Assignee
Jinan Bodor Laser Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Bodor Laser Co Ltd filed Critical Jinan Bodor Laser Co Ltd
Priority to CN202310968638.3A priority Critical patent/CN116721094B/en
Publication of CN116721094A publication Critical patent/CN116721094A/en
Application granted granted Critical
Publication of CN116721094B publication Critical patent/CN116721094B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a visual pattern recognition method and system for complex backgrounds, and a laser cutting machine, belonging to the technical field of laser cutting. A workpiece image is captured and sent to an industrial personal computer, which converts it into a digital original image and compresses and stores it as persistent data; the persisted digital original image is preprocessed; noise in the preprocessed digital image is eliminated, convolution parameters of the digital image are established, the image is processed in layers, and the background texture of each layer is eliminated and filtered out; regions that satisfy the matching degree are selected and marked using a penalty operation between the template information and the image data; a vector operation is performed on the marked image data and the digital original image to determine the maximum distribution of the support vector, and if the distribution satisfies the workpiece processing parameters, the recognition result is output. The method reduces the influence of the machine-tool-body background on the workpiece machining process, improves recognition quality, and facilitates production and operation by field personnel.

Description

Visual pattern recognition method and system under complex background and laser cutting machine
Technical Field
The invention belongs to the technical field of laser cutting, and particularly relates to a visual pattern recognition method and system under a complex background and a laser cutting machine.
Background
Before laser cutting, cutting lines are designed in advance with a plotter according to the size and style of the plate to be processed. The cutting lines are imported into the cutting machine tool before processing, then manually aligned to the processing starting point of the plate, and finally the processing operation is carried out. This procedure involves tedious steps and a large amount of manual work.
There is also a method for automatically executing cutting lines. For example, the prior art discloses a visual edge-finding and positioning method for a laser cutting machine, whose steps include: step S1, placing the cut plate on a workbench while ensuring, through an image acquisition device electrically connected with a camera, that the plate lies within the camera's shooting range; step S2, camera self-check; step S3, removing the background from the image acquired by the camera in step S1; step S4, assigning coordinate parameters to the image of step S3; step S5, the controller calculates the coordinate parameters of the cut plate obtained in step S4 to obtain an origin coordinate and a rotation angle; step S6, the controller controls the workpiece coordinate system of the laser cutting machine to rotate; and step S7, the controller moves the cutting head to the origin of the cut plate's coordinate system, so that accurate positioning of the cut plate and automatic cutting along the cutting lines are achieved. However, because existing cutting environments are complex, the acquired images contain a large number of external interference factors during visual processing, which affects the accuracy and precision of vision-based edge finding and positioning, fails to meet cutting-process requirements, and reduces cutting quality.
Disclosure of Invention
The invention provides a visual pattern recognition method under a complex background, which can improve the accuracy and precision of visual pattern recognition, meet cutting-process requirements, and improve the processing efficiency of an automated assembly line.
The visual pattern recognition method under the complex background comprises the following steps:
s101: shooting a workpiece image, sending the workpiece image to an industrial personal computer, converting the workpiece image into a digital original image by the industrial personal computer, and compressing and storing the digital original image as persistent data;
s102: preprocessing the persisted digital original image;
s103: eliminating the noise of the preprocessed digital image, establishing convolution parameters of the digital image, layering, and eliminating and filtering each layer of background texture in the digital image;
s104: verifying and matching the layered image data of each region using squared differences, and selecting and marking the regions that satisfy the matching degree with a penalty operation between the template information and the image data;
s105: carrying out vector operation on the marked image data and the digital original image, fusing all pixels in the marked area of the original image, determining the maximum distribution of the support vector, and outputting the identification result if the distribution meets the workpiece processing parameters.
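Read as a whole, steps S101–S105 form a recognition pipeline. The following is a minimal sketch of the core S102–S104 stages; all function names, the box-filter background estimate, and the squared-difference search are illustrative assumptions rather than the patented algorithm itself:

```python
import numpy as np

def preprocess(img):
    """S102 placeholder: undistortion would go here; identity in this sketch."""
    return img.astype(np.float64)

def suppress_background(img, box=3):
    """S103 sketch: subtract a box-filter background estimate to damp
    slowly varying background texture."""
    pad = box // 2
    padded = np.pad(img, pad, mode="edge")
    bg = np.zeros_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            bg[i, j] = padded[i:i + box, j:j + box].mean()
    return img - bg

def match_and_mark(img, template):
    """S104 sketch: squared-difference matching; returns the top-left
    corner of the best-matching region (score 0 = perfect match)."""
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for i in range(img.shape[0] - th + 1):
        for j in range(img.shape[1] - tw + 1):
            ssd = ((img[i:i + th, j:j + tw] - template) ** 2).sum()
            if ssd < best:
                best, best_pos = ssd, (i, j)
    return best_pos
```

A template placed over the region where its pattern occurs scores 0 and wins; the later sections refine this with filtering and gradient-based NCC matching.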
In step S101, the captured workpiece images are stored in a buffer at the front end of the camera and sent to the industrial personal computer in shooting order based on preset sending conditions;
after receiving a complete workpiece image, the industrial personal computer feeds back the completion of receiving the current workpiece image to the camera;
the industrial personal computer starts an imaging callback processing process, temporarily stores the workpiece image, converts the workpiece image into a digital original image according to a preset instruction, and compresses and stores the digital original image as persistent data.
It should be further noted that, in step S102, the preprocessing operation manner includes:
any distortion point in the digital original image is selected on the imager, and the distribution position of the distortion point in the radial direction is adjusted by the following formula.
Wherein: (x 0, y 0) is the original position of the distortion point on the imager, and (x, y) is the distortionThe newer position, r, is the radius, k, from the point centered on the optical axis 1 Is a first order radial distortion coefficient, k 2 Is the second order radial distortion coefficient, k 3 Is a third order radial distortion coefficient.
Tangential distortion is described with distortion model parameters $p_1$ and $p_2$:

$$x' = x + \left[\,2 p_1 x y + p_2 (r^2 + 2 x^2)\,\right]$$
$$y' = y + \left[\,p_1 (r^2 + 2 y^2) + 2 p_2 x y\,\right]$$
it should be further noted that, in step S103, when the convolution is used for calculation, the center position of the convolution kernel is set on the digital image to be calculated, the product of the pixel values of the digital image of each element in the convolution kernel and the coverage position is calculated, and summed, so as to obtain a new pixel value of the digital image of the coverage position.
It should be further noted that, in step S103, each layer of background texture in the digital image is eliminated and filtered based on the following equation in combination with the filter:

$$H(u,v) = \begin{cases} 1, & D(u,v) \le D_0 \\ 0, & D(u,v) > D_0 \end{cases}$$

In the filter, $D_0$ is a suitable constant and $D(u,v)$ is the distance between a point $(u,v)$ in the frequency domain and the center of the frequency-domain rectangle; the radius value $D_0$ is given as a threshold. The filter equals 1 within the threshold, and $H(u,v)$ equals 0 above it.
It should be further noted that, in step S103, the boundary between the unfiltered spectrum and the filtered spectrum in the digital image is further smoothed based on the following Butterworth-type formula (with order parameter $n$ as introduced later):

$$H(u,v) = \frac{1}{1 + \left[ D(u,v)/D_0 \right]^{2n}}$$
it should be further noted that, step S103 further performs fuzzy search on the digital image, establishes a preset number of identification areas, and performs calibration by using the morphological differences of the image data of each area.
It should be further noted that the template adopted in step S104 is an NCC template, and dx, dy and the operator gradient are obtained based on the Sobel gradient operator:

$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} * A$$

where $A$ is the image, $G_x$ the X-direction gradient, and $G_y$ the Y-direction gradient. According to the X-direction and Y-direction gradients of the image, the total gradient is calculated as:

$$G = \sqrt{G_x^2 + G_y^2}$$

where $G$ is the total gradient and $G_x^2$, $G_y^2$ are the squares of the X-direction and Y-direction gradients.
The method further comprises the steps of obtaining an edge image through a Canny algorithm, obtaining all contour point sets based on contour discovery, calculating three values of dx, dy and operator gradient (dxy) of each point based on each point in the contour point sets, and generating template information;
matching the template information with the image data based on a gradient matching algorithm to offset tiny pixel migration appearing on the image data;
in the matching process, a threshold is set; if the accumulated score at any point falls below the threshold, matching at that point stops and continues from the next point.
The invention also provides a visual pattern recognition system under a complex background, which comprises: the device comprises an image acquisition module, an industrial personal computer, an image processing module, a filtering processing module, a verification matching module and a result output module;
the image acquisition module is used for shooting a workpiece image and sending the workpiece image to the industrial personal computer, and the industrial personal computer converts the workpiece image into a digital original image and compresses and stores the digital original image into persistent data;
the image processing module is used for preprocessing the persisted digital original image;
the filtering processing module is used for eliminating the noise of the digital image after the preprocessing, establishing convolution parameters of the digital image, carrying out layering processing and eliminating and filtering each layer of background texture in the digital image;
the verification matching module is used for verifying and matching the layered image data of each region using squared differences, and for selecting and marking the regions that satisfy the matching degree with a penalty operation between the template information and the image data;
the result output module is used for carrying out vector operation on the marked image data and the digital original image, fusing all pixels in the marked area of the original image, determining the maximum distribution of the support vector, and outputting the identification result if the distribution meets the workpiece processing parameters.
The invention also provides a laser cutting machine, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of the visual pattern recognition method under the complex background when executing the program.
From the above technical scheme, the invention has the following advantages:
Through distortion correction the invention removes distortion from the image data, then eliminates deviation to obtain an image suitable for vision-software processing; finally the image is processed by the vision software, whose processing algorithm can adjust its input according to the parameter configuration information for processing and calculation, yielding the required image data. The invention reduces the influence of the machine-tool-body background on the workpiece machining process, improves recognition quality, and is easy for production and field personnel to operate. The process parameters can be adjusted in time as needed; the visual pattern recognition process has a certain flexibility, strong redundancy and good interactivity. The recognized visual image can be applied to a laser cutting process.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the description will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a visual pattern recognition method in a complex background;
FIG. 2 is a schematic diagram of an embodiment of the overlay layering of the present invention;
fig. 3 is a schematic diagram of another embodiment of the present invention.
Detailed Description
The visual pattern recognition method under a complex background is applied to a laser cutting machine to perform visual pattern recognition on the workpiece and output the result. Using a vision system with corresponding image processing algorithms, it can automatically recognize the shape, fixture position, processing position and the like of the workpiece, output corresponding image data according to the actual machining process, and assist in locating the processing starting point of the workpiece and positioning the workpiece during machining. It requires less manual intervention, realizes an automatic recognition mode, and makes the machining process of an automatic laser cutting machine easy to implement.
The visual pattern recognition method under a complex background provided by the invention can also acquire and process the associated data based on artificial intelligence technology. It can use machine simulation controlled by the industrial personal computer to extend and expand human intelligence: sensing the environment, acquiring knowledge, and using that knowledge to obtain the best result through appropriate theories, methods, techniques and application devices. The method combines techniques including an artificial neural network, an image-distortion-analysis mathematical model, an image-filtering processing model, an NCC template matching model corresponding to the NCC algorithm, a vector-type mask operation model, a confidence network model, and a generalization learning model. By establishing corresponding model algorithms and using technologies such as sensor monitoring and data transmission, visual pattern recognition under a complex background is realized, assisting the visual pattern processing of the laser cutting process so that its real-time state can be effectively recognized. Interaction and analysis of the workpiece and the processing parameters based on physical space are realized, effectively improving the quality and efficiency of laser cutting.
The visual pattern recognition method in the complex background provided by the invention can be written into an industrial personal computer or a laser cutting machine based on programming languages including, but not limited to, object-oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages such as "C" or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer.
The industrial personal computer may be any electronic product that can interact with a user, such as a personal computer, a tablet computer, a smart phone, a personal digital assistant (Personal Digital Assistant, PDA), an interactive internet protocol television (Internet Protocol Television, IPTV), etc.
The industrial personal computer and the laser cutter may include network devices and/or user devices. The network in which the industrial personal computer and the laser cutting machine operate includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a flowchart of a visual pattern recognition method in a complex background according to an embodiment is shown, where the method includes:
s101: shooting a workpiece image, sending the workpiece image to an industrial personal computer, converting the workpiece image into a digital original image by the industrial personal computer, and compressing and storing the digital original image as persistent data;
According to the embodiment of the application, the image of the workpiece can be shot by a camera installed on the laser cutting machine. Because the workpiece is mounted on the processing machine table or placed in the processing area, its background is not uniform, and other objects may sometimes be present, which can affect visual pattern recognition. This embodiment is described with an emphasis on maintaining recognition quality as background complexity increases.
To improve the image-capture effect, during shooting the camera generates digital information through refraction and photoelectric conversion in the imaging device and stores it in the camera cache, signaling the industrial personal computer at the appropriate time. The corresponding workpiece image can be acquired by sending a control instruction from the industrial personal computer to the camera, or the workpiece image can be sent to the industrial personal computer periodically and regularly based on a control instruction preset in the system.
For example, another way may be that the industrial personal computer receives the trigger signal of the camera, establishes a buffer area, and then informs the camera to grant transmission data, and the camera performs orderly transmission according to the shooting time sequence and based on the codes of the shot images. In the image transmission process, based on frame data transmission, after the industrial personal computer receives the last frame data of an image, the industrial personal computer feeds back the receiving completion state to the camera. The industrial personal computer temporarily stores the workpiece image, converts the workpiece image into a digital original image according to a user instruction, and compresses and stores the digital original image as persistent data.
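The buffered handshake described above — frames queue up in the camera cache and are released in shooting order once the industrial personal computer acknowledges the previous frame — can be sketched as follows. All class and method names here are illustrative assumptions, not the patent's actual interface:

```python
from collections import deque

class CameraBuffer:
    """Camera-side cache: frames are queued and sent in shooting order."""
    def __init__(self):
        self._frames = deque()

    def capture(self, frame_id):
        self._frames.append(frame_id)          # store in shooting sequence

    def send_next(self):
        """Release the oldest buffered frame, or None if the cache is empty."""
        return self._frames.popleft() if self._frames else None

class IPC:
    """Industrial-personal-computer side: receives a frame, then acks it."""
    def __init__(self):
        self.received = []

    def receive(self, frame_id):
        if frame_id is not None:
            self.received.append(frame_id)     # feedback: reception complete
            return True
        return False
```

Because the queue is first-in-first-out, the IPC always receives images in the order they were shot, matching the ordered transmission the text describes.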
S102: preprocessing the lasting digital original image;
In this embodiment, size integration of the persisted digital original image is performed to adapt to the computing power of the industrial personal computer and improve image-processing efficiency. The processing extracts the distortion intrinsic-parameter configuration, loads it into the algorithm parameters, and performs de-distortion correction on the digital original image; the configuration covers radial distortion and tangential distortion.
In the radial distortion, the distortion of the optical axis center of the imager is 0, and the distortion becomes more serious when the optical axis center moves to the edge along the radial direction of the lens. The mathematical model of the distortion may use a principal point, where the principal point is the point centered on the optical axis, with a distortion of 0. While the first few terms of the taylor series expansion around the principal point are described, the first two terms, k1 and k2, are typically used.
This embodiment adds the third term k3. According to the radial position of a point on the imager, the adjustment formula is:

$$x = x_0\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$
$$y = y_0\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$

where $(x_0, y_0)$ is the original position of the distortion point on the imager (its position in the original picture), $(x, y)$ is the adjusted position, and $r$ is the radius measured from the optical-axis center. The farther from the optical center, the larger the radial displacement — that is, the larger the distortion — while near the optical center there is almost no offset.
Tangential distortion is described with distortion model parameters $p_1$ and $p_2$:

$$x' = x + \left[\,2 p_1 x y + p_2 (r^2 + 2 x^2)\,\right]$$
$$y' = y + \left[\,p_1 (r^2 + 2 y^2) + 2 p_2 x y\,\right]$$
and then, a calibration algorithm commonly used in the laser cutting field can be called to map the sizes of the image and the real environment, so that the method is fully prepared for subsequent processing.
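The combined radial and tangential adjustment can be evaluated per point. The sketch below implements the standard Brown-style model that the coefficients $k_1$–$k_3$, $p_1$, $p_2$ describe; the function name and the use of normalized coordinates are assumptions, and this is not the patent's exact implementation:

```python
def distort_point(x0, y0, k1, k2, k3, p1, p2):
    """Map a normalized image point (x0, y0) through the radial
    (k1, k2, k3) and tangential (p1, p2) distortion model."""
    r2 = x0 * x0 + y0 * y0                      # r^2, distance^2 from axis
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x = x0 * radial + 2.0 * p1 * x0 * y0 + p2 * (r2 + 2.0 * x0 * x0)
    y = y0 * radial + p1 * (r2 + 2.0 * y0 * y0) + 2.0 * p2 * x0 * y0
    return x, y
```

With all coefficients zero the mapping is the identity (no distortion at the optical center, consistent with the text), and a positive $k_1$ pushes points outward with growing radius.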
S103: and eliminating the noise of the digital image after the pretreatment, establishing convolution parameters of the digital image, layering, and eliminating and filtering each layer of background texture in the digital image.
In this embodiment, the elimination of background texture may be performed based on the X-direction and Y-direction in the digital image. It will be appreciated that two-dimensional coordinates or three-dimensional coordinates are configured in the system, and background textures in the digital image are eliminated based on the X direction and the Y direction of the two-dimensional coordinates or the three-dimensional coordinates, and the elimination process can be performed over the whole breadth.
When convolution is used for calculation, the center of the convolution kernel is placed on the pixel to be calculated; the products of each kernel element and the image pixel value it covers are calculated and summed, and the result is the new pixel value at that position.
This embodiment can define the convolved image as $g = f * h$, i.e.

$$g(i,j) = \sum_{k}\sum_{l} f(i+k,\, j+l)\, h(k,l)$$

where $g$ is the convolved image, $f$ the original image, $h$ the convolution kernel, and $k$, $l$ the horizontal and vertical offsets within the kernel.
As a processing manner of the present invention, as shown in fig. 2, the calculation manner of each pixel in the digital image and the pixel value of the image covered by the pixel may include the following steps:
(11) Performing sliding processing on the convolution kernel so that the center of the convolution kernel is positioned on the (i, j) pixel of the input image g;
(12) Summing by using the above formula to obtain a (i, j) pixel value of the output image;
(13) The above operations are repeated until all pixel values of the output image are found.
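Steps (11)–(13) describe a sliding cross-correlation. A minimal sketch (border pixels are skipped rather than padded, which is a simplification of this text's method):

```python
import numpy as np

def correlate2d(f, h):
    """Slide kernel h over image f: g(i, j) = sum_{k,l} f(i+k, j+l) h(k,l)
    with (k, l) centred on the kernel middle; borders are left at zero."""
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    g = np.zeros_like(f, dtype=float)
    for i in range(ph, f.shape[0] - ph):        # step (11): slide the kernel
        for j in range(pw, f.shape[1] - pw):
            patch = f[i - ph:i + ph + 1, j - pw:j + pw + 1]
            g[i, j] = (patch * h).sum()         # step (12): weighted sum
    return g                                    # step (13): all pixels done
```

With an identity kernel (single 1 at the center) the interior of the output reproduces the input, which is a quick sanity check of the indexing.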
In yet another embodiment of the present invention, the convolution may be defined as

$$g(i,j) = \sum_{k}\sum_{l} f(i-k,\, j-l)\, h(k,l)$$

i.e. the kernel is flipped before the products are summed.
The specific implementation steps, as shown in Fig. 3, are as follows: (21) rotate the convolution kernel 180 degrees about its own center;
(22) Performing sliding processing on the convolution kernel so that the center of the convolution kernel is positioned on the (i, j) pixel of the input image g;
(23) Summing by using the above formula to obtain a (i, j) pixel value of the output image;
(24) The above operations are repeated until all pixel values of the output image are found.
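The difference from the first embodiment is step (21): rotating the kernel 180° turns cross-correlation into true convolution, matching $g(i,j) = \sum f(i-k, j-l)\,h(k,l)$. A single-pixel sketch (function name is illustrative):

```python
import numpy as np

def convolve_at(f, h, i, j):
    """One output pixel of a true convolution: rotate h by 180 degrees
    (step 21), then take the weighted sum over the covered patch."""
    hr = h[::-1, ::-1]                          # step (21): 180-degree rotation
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    patch = f[i - ph:i + ph + 1, j - pw:j + pw + 1]
    return (patch * hr).sum()                   # steps (22)-(23)
```

For a symmetric kernel the rotation changes nothing, which is why the two embodiments coincide for kernels such as Gaussian blurs; for asymmetric kernels (as in the test below) the flip matters.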
According to the embodiment of the application, after removing textures of each layer of image, superposition filtering is performed to restore the digital image.
This embodiment also aims to eliminate convolution influence factors without affecting later recognition. The superposition filtering process uses a low-pass filter with $D_0 = 50$ and a high-pass filter with $D_0 = 50$. The low-pass filter is

$$H_{lp}(u,v) = \begin{cases} 1, & D(u,v) \le D_0 \\ 0, & D(u,v) > D_0 \end{cases}$$

where $D_0$ is a suitable constant, $D(u,v)$ is the distance between a point $(u,v)$ in the frequency domain and the center of the frequency-domain rectangle, and the radius value $D_0$ serves as the threshold; the low-pass filter equals 1 within the threshold and 0 above it.

The high-pass filter equals 0 within the preset threshold and 1 above it:

$$H_{hp}(u,v) = \begin{cases} 0, & D(u,v) \le D_0 \\ 1, & D(u,v) > D_0 \end{cases}$$
In this embodiment, after the above filtering, the boundary between the unfiltered and filtered frequencies is made inconspicuous and the image has continuity. The filter introduces a parameter $n$ into the function; operating on $n$ adjusts the sharpness of the boundary between unfiltered and filtered frequencies. Specifically:

$$H(u,v) = \frac{1}{1 + \left[ D(u,v)/D_0 \right]^{2n}}$$
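The ideal low-pass, ideal high-pass, and smoothed (Butterworth-style) masks, and their application in the frequency domain, can be sketched as follows; the helper names and the FFT-based application are illustrative assumptions consistent with the formulas above:

```python
import numpy as np

def filter_mask(shape, d0=50.0, kind="low", n=2):
    """Build H(u, v) over a centred spectrum: ideal low-pass ('low'),
    ideal high-pass ('high'), or the smoothed form with parameter n."""
    h, w = shape
    u = np.arange(h)[:, None] - h / 2.0
    v = np.arange(w)[None, :] - w / 2.0
    d = np.sqrt(u * u + v * v)                  # D(u, v): distance from centre
    if kind == "low":
        return (d <= d0).astype(float)          # 1 inside the threshold radius
    if kind == "high":
        return (d > d0).astype(float)           # complement of the low-pass
    return 1.0 / (1.0 + (d / d0) ** (2 * n))   # smooth boundary, order n

def apply_frequency_filter(img, mask):
    """Multiply the shifted spectrum by H(u, v); return the real image."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
```

Note that the low- and high-pass masks sum to 1 everywhere, which is what makes the superposition-filtering restoration of the digital image possible.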
in this embodiment, the recognition processing may be to perform fuzzy search on the digital image, establish five recognition areas, fill each area respectively, perform algorithm coverage on each area, and then perform calibration on the actual graph by using the morphological differences.
S104: verifying and matching the layered image data of each region by utilizing square difference, and selecting a region meeting the matching degree by adopting punishment operation between template information and the image data for marking;
In this embodiment, after the five recognition areas are established, the system can perform traversal tracking, match and verify the five recognition areas using element members, verify the similarity of the edges, optimize according to the similarity result, and mark the closest and most complete area within the similarity-surrounding region.
Therefore, the embodiment realizes NCC template matching based on gradient level based on image gradient, and obtains dx, dy and operator gradients based on Sobel gradient operators.
The X-direction and Y-direction gradients of the image are:

$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} * A$$

where $A$ is the image, $G_x$ the X-direction gradient, and $G_y$ the Y-direction gradient. From the X-direction and Y-direction gradients, the total gradient is finally calculated:

$$G = \sqrt{G_x^2 + G_y^2}$$

where $G$ is the total gradient and $G_x^2$, $G_y^2$ are the squares of the X- and Y-direction gradients.
The edge image is obtained by the Canny algorithm, all contour point sets are obtained based on contour discovery, and for each point in the contour point sets three values — dx, dy and the gradient magnitude — are calculated. Based on the gradient matching algorithm, tiny pixel migration appearing on the target image can be counteracted.
The algorithm of the NCC template is calculated based on the following formula:

$$NCC = \frac{\sum_{x,y} \left( T(x,y) - \bar{T} \right)\left( I(x,y) - \bar{I} \right)}{\sqrt{\sum_{x,y} \left( T(x,y) - \bar{T} \right)^2 \sum_{x,y} \left( I(x,y) - \bar{I} \right)^2}}$$

where $T$ is the template, $I$ the covered image window, and $\bar{T}$, $\bar{I}$ their mean values.
during the matching process, the NCC template output should approach a given threshold at any one point. If the sum match is less than the threshold at any point, the match stops and continues from the next point.
The matching method of this embodiment also involves the following. Let the obtained result matrix be $R$, the template image matrix $T$, and the source image matrix $I$. The squared-difference matching method used here is:

$$R(x,y) = \sum_{x',y'} \left( T(x',y') - I(x+x',\, y+y') \right)^2$$

That is, the sum of squared differences between the template pixels and the covered source-image pixels is the value at the corresponding point of the result matrix. The closer the value is to 0, the higher the degree of matching.
In this embodiment, the variance is used to match, and if the match is worse, the match value is larger,
the punishment operation between the template and the image is adopted, so that a larger number indicates a higher matching degree, and 0 indicates the worst matching effect;
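The squared-difference matching described above can be sketched as follows (illustrative only, names are assumptions):

```python
import numpy as np

def sqdiff_match(source: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Squared-difference matching:
    R(x, y) = sum over (x', y') of (T(x', y') - I(x + x', y + y'))^2.
    The closer a value of R is to 0, the better the match there."""
    th, tw = template.shape
    sh, sw = source.shape
    t = template.astype(np.float64)
    r = np.empty((sh - th + 1, sw - tw + 1), dtype=np.float64)
    for y in range(r.shape[0]):
        for x in range(r.shape[1]):
            d = source[y:y + th, x:x + tw].astype(np.float64) - t
            r[y, x] = (d * d).sum()
    return r
```

One possible "punishment" normalization — an assumption, since the patent does not give the exact formula — is $s = 1/(1+R)$, which maps a perfect match to 1 and ever-worse matches toward 0, so that a larger number indicates a higher matching degree.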
In this embodiment, a correlation-coefficient matching method may also be used.
S105: carrying out vector operation on the marked image data and the digital original image, fusing all pixels in the marked area of the original image, determining the maximum distribution of the support vector, and outputting the identification result if the distribution meets the workpiece processing parameters.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In this way, the invention removes distortion from the image data through distortion correction and then eliminates deviation to obtain an image suitable for processing by vision software. Finally, the image is processed by the vision software, whose processing algorithm can adjust its input according to the parameter configuration information, yielding the required image data. The invention can reduce the influence of the machine-tool body background on the workpiece machining process, improve recognition quality, and is easy for production and field personnel to operate. The process parameters can be adjusted as needed, so the visual pattern recognition process has a certain flexibility, strong redundancy and good interactivity. The recognized visual image can be applied to a laser cutting process.
The following is an embodiment of a visual pattern recognition system under a complex background provided by an embodiment of the present disclosure. It belongs to the same inventive concept as the visual pattern recognition method under a complex background of the above embodiments; for details not described in this system embodiment, reference may be made to the embodiment of the visual pattern recognition method under a complex background.
The system comprises: the device comprises an image acquisition module, an industrial personal computer, an image processing module, a filtering processing module, a verification matching module and a result output module;
the image acquisition module is used for shooting a workpiece image and sending the workpiece image to the industrial personal computer, and the industrial personal computer converts the workpiece image into a digital original image and compresses and stores the digital original image into persistent data;
The image processing module is used for preprocessing the persisted digital original image;
the filtering processing module is used for eliminating the noise of the digital image after the preprocessing, establishing convolution parameters of the digital image, carrying out layering processing and eliminating and filtering each layer of background texture in the digital image;
the verification matching module is used for verifying and matching the layered image data of each region by utilizing square difference, and selecting the region meeting the matching degree by adopting punishment operation between the template information and the image data for marking;
the result output module is used for carrying out vector operation on the marked image data and the digital original image, fusing all pixels in the marked area of the original image, determining the maximum distribution of the support vector, and outputting the identification result if the distribution meets the workpiece processing parameters.
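The module chain above can be sketched as a simple pipeline of functions. This is purely illustrative: every name and body below is an assumption (the distortion correction is stubbed to an identity, and a 3×3 mean filter stands in for the layered background-texture removal), and the matching and result-output stages are omitted:

```python
import numpy as np

def preprocess(raw: np.ndarray) -> np.ndarray:
    """Image processing module: distortion correction / deviation removal
    would happen here (identity stub in this sketch)."""
    return raw.astype(np.float64)

def denoise_and_filter(img: np.ndarray) -> np.ndarray:
    """Filtering module: a 3x3 mean filter stands in for the layered
    noise elimination and background-texture filtering."""
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + 3, j:j + 3].mean()
    return out

def recognize(raw: np.ndarray) -> np.ndarray:
    """End-to-end flow: acquisition -> preprocessing -> filtering -> ...
    (verification matching and result output omitted in this sketch)."""
    return denoise_and_filter(preprocess(raw))
```

Structuring the modules as composable functions mirrors the system description: each stage consumes the previous stage's image and can be swapped out independently.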
The visual pattern recognition system under the complex background realizes visual pattern recognition under the complex background by establishing a corresponding model algorithm and utilizing the technologies of sensor monitoring, data transmission and the like, thereby assisting the visual pattern processing process of the laser cutting process and effectively recognizing the real-time state of the laser cutting process. Interaction and analysis of the workpiece and the processing technological parameters based on physical space are realized, and the quality and the cutting efficiency of laser cutting are effectively improved.
The visual pattern recognition system under a complex background of the present invention comprises the units and algorithm steps of the examples described in connection with the embodiments disclosed herein, and can be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of each example have been described above generally in terms of their functions. Whether such functions are implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
From the above description of the embodiments, those skilled in the art will readily understand that the visual pattern recognition method under a complex background described herein may be implemented by software, or by a combination of software and the necessary hardware. Accordingly, aspects of the disclosed embodiments may be embodied in the form of a software product, which may be stored on a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) and includes instructions to cause a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to perform the visual pattern recognition method according to embodiments of the disclosure.
Those skilled in the art will appreciate that aspects of the visual pattern recognition method under a complex background may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to collectively herein as a "circuit", "module" or "system".
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A visual pattern recognition method in a complex background, the method comprising:
s101: shooting a workpiece image, sending the workpiece image to an industrial personal computer, converting the workpiece image into a digital original image by the industrial personal computer, and compressing and storing the digital original image as persistent data;
s102: preprocessing the persisted digital original image;
in step S102, the preprocessing operation mode includes:
selecting any distortion point in a digital original image on an imager, and adjusting the distribution position of the distortion point in the radial direction;
in tangential distortion, describing the distortion model with parameters $p_1$ and $p_2$;
S103: eliminating the noise of the preprocessed digital image, establishing convolution parameters of the digital image, layering, and eliminating and filtering each layer of background texture in the digital image;
s104: verifying and matching the layered image data of each region by utilizing square difference, and selecting a region meeting the matching degree by adopting punishment operation between template information and the image data for marking;
in the method, the obtained result matrix is set as $R$, the template image matrix is $T$, and the source image matrix is $I$;
using the matching method

$$R(x,y)=\sum_{x',y'}\big(T(x',y')-I(x+x',y+y')\big)^2,$$

i.e. the sum of the squared differences between the template image pixels and the covered source image pixels is the value at the corresponding point of the result matrix;
the squared difference is then used for matching, so that a worse match yields a larger match value;
a punishment operation between the template and the image is adopted, so that a larger number indicates a higher matching degree and 0 indicates the worst matching effect;
s105: carrying out vector operation on the marked image data and the digital original image, fusing all pixels in the marked area of the original image, determining the maximum distribution of the support vector, and outputting the identification result if the distribution meets the workpiece processing parameters.
2. The method for visual pattern recognition in a complex background according to claim 1, wherein,
in step S101, the picked-up workpiece images are stored in a buffer memory at the front end of the camera, and the workpiece images are sent to the industrial personal computer according to the picking-up sequence based on preset sending conditions;
after receiving a complete workpiece image, the industrial personal computer feeds back the completion of receiving the current workpiece image to the camera;
the industrial personal computer starts an imaging callback processing process, temporarily stores the workpiece image, converts the workpiece image into a digital original image according to a preset instruction, and compresses and stores the digital original image as persistent data.
3. The method for visual pattern recognition in a complex background according to claim 1 or 2, wherein,
in step S102, the preprocessing operation mode includes:
selecting any distortion point in the digital original image on an imager, and adjusting the distribution position of the distortion point in the radial direction by the following formulas:

$$x = x_0\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6), \qquad y = y_0\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$

wherein $(x_0, y_0)$ is the original position of the distortion point on the imager, $(x, y)$ is the new position after distortion, $r$ is the radius with the optical-axis center point as the origin, $k_1$ is the first-order radial distortion coefficient, $k_2$ is the second-order radial distortion coefficient, and $k_3$ is the third-order radial distortion coefficient;
in tangential distortion, the distortion model is described with parameters $p_1$ and $p_2$:

$$x = x_0 + \big[\,2 p_1 x_0 y_0 + p_2 (r^2 + 2 x_0^2)\,\big], \qquad y = y_0 + \big[\,p_1 (r^2 + 2 y_0^2) + 2 p_2 x_0 y_0\,\big]$$
4. the visual pattern recognition method according to claim 1 or 2, wherein in step S103, when the convolution is used for calculation, the center position of the convolution kernel is set on the digital image to be calculated, the product of each element in the convolution kernel and the digital image pixel value of the overlay position is calculated, and the products are summed to obtain a new pixel value of the digital image of the overlay position.
5. The method for visual pattern recognition under a complex background according to claim 4, wherein the removing and filtering the background texture of each layer in the digital image in step S103 comprises: based on the following equations respectively in combination with the filter,
in the filter, D 0 Is a reasonable constant, D (u, v) is the distance between a point (u, v) in the frequency domain and the rectangular center of the frequency domain, given a radius value D 0 As a threshold value;
the filter is equal to 1 at a threshold H (u, v) above which H (u, v) is equal to 0.
6. The method for visual pattern recognition in a complex background according to claim 4,
the boundary between the unfiltered spectrum and the filtered spectrum in the digital image is also smoothed in step S103 based on the following formula,
7. the method for recognizing visual patterns in a complex background according to claim 1, wherein step S103 further performs fuzzy search on the digital image, establishes a predetermined number of recognition areas, and performs calibration using the morphological differences of the image data of each area.
8. The visual pattern recognition method under a complex background according to claim 7, wherein the template adopted in step S104 is an NCC template, and dx, dy and the operator gradient are obtained based on Sobel gradient operators;
wherein the X-direction and Y-direction gradients are

$$G_x = S_x * A, \qquad G_y = S_y * A$$

where $A$ is the image, $G_x$ is the X-direction gradient, and $G_y$ is the Y-direction gradient;
the total gradient is calculated from the X-direction gradient and the Y-direction gradient of the image:

$$G = \sqrt{G_x^2 + G_y^2}$$

where $G$ is the total gradient, $G_x^2$ is the square of the X-direction gradient, and $G_y^2$ is the square of the Y-direction gradient;
the method further comprises the steps of obtaining an edge image through a Canny algorithm, obtaining all contour point sets based on contour discovery, calculating three values of dx, dy and operator gradient (dxy) of each point based on each point in the contour point sets, and generating template information;
matching the template information with the image data based on a gradient matching algorithm to offset tiny pixel migration appearing on the image data;
in the matching process, a threshold is set; if the accumulated value at any point is smaller than the threshold, matching at that point is stopped and matching continues from the next point.
9. A visual pattern recognition system in a complex background, wherein the system employs the visual pattern recognition method in a complex background according to any one of claims 1 to 8;
the system comprises: the device comprises an image acquisition module, an industrial personal computer, an image processing module, a filtering processing module, a verification matching module and a result output module;
the image acquisition module is used for shooting a workpiece image and sending the workpiece image to the industrial personal computer, and the industrial personal computer converts the workpiece image into a digital original image and compresses and stores the digital original image into persistent data;
The image processing module is used for preprocessing the persisted digital original image;
the filtering processing module is used for eliminating the noise of the digital image after the preprocessing, establishing convolution parameters of the digital image, carrying out layering processing and eliminating and filtering each layer of background texture in the digital image;
the verification matching module is used for verifying and matching the layered image data of each region by utilizing square difference, and selecting the region meeting the matching degree by adopting punishment operation between the template information and the image data for marking;
the result output module is used for carrying out vector operation on the marked image data and the digital original image, fusing all pixels in the marked area of the original image, determining the maximum distribution of the support vector, and outputting the identification result if the distribution meets the workpiece processing parameters.
10. A laser cutting machine comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the visual pattern recognition method in the complex background of any one of claims 1 to 8 when the program is executed.
CN202310968638.3A 2023-08-03 2023-08-03 Visual pattern recognition method and system under complex background and laser cutting machine Active CN116721094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310968638.3A CN116721094B (en) 2023-08-03 2023-08-03 Visual pattern recognition method and system under complex background and laser cutting machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310968638.3A CN116721094B (en) 2023-08-03 2023-08-03 Visual pattern recognition method and system under complex background and laser cutting machine

Publications (2)

Publication Number Publication Date
CN116721094A CN116721094A (en) 2023-09-08
CN116721094B true CN116721094B (en) 2023-12-19

Family

ID=87868218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310968638.3A Active CN116721094B (en) 2023-08-03 2023-08-03 Visual pattern recognition method and system under complex background and laser cutting machine

Country Status (1)

Country Link
CN (1) CN116721094B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909941A (en) * 2017-02-27 2017-06-30 广东工业大学 Multilist character recognition system and method based on machine vision
WO2018103693A1 (en) * 2016-12-07 2018-06-14 西安知象光电科技有限公司 Hybrid light measurement method for measuring three-dimensional profile
CN108875740A (en) * 2018-06-15 2018-11-23 浙江大学 A kind of machine vision cutting method applied to laser cutting machine
CN110648367A (en) * 2019-08-15 2020-01-03 大连理工江苏研究院有限公司 Geometric object positioning method based on multilayer depth and color visual information
CN113395415A (en) * 2021-08-17 2021-09-14 深圳大生活家科技有限公司 Camera data processing method and system based on noise reduction technology
WO2021197341A1 (en) * 2020-04-03 2021-10-07 速度时空信息科技股份有限公司 Monocular image-based method for updating road signs and markings
CN116092086A (en) * 2023-01-11 2023-05-09 上海智能制造功能平台有限公司 Machine tool data panel character extraction and recognition method, system, device and terminal
CN116150644A (en) * 2021-11-18 2023-05-23 青岛国数信息科技有限公司 Radio frequency spectrum signal detection method based on image module matching
CN116194033A (en) * 2020-07-15 2023-05-30 爱尔康公司 Digital image optimization for ophthalmic surgery

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950508B (en) * 2021-03-12 2022-02-11 中国矿业大学(北京) Drainage pipeline video data restoration method based on computer vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Precision measurement of small-size parts based on support vector regression; He Qiuwei; Wang Longshan; Liu Qingmin; Li Guofa; Optics and Precision Engineering (Issue 4); full text *
Research on automatic edge-finding technology for laser cutting based on machine vision; Zhong Ping; Chen Yimin; Zhang Wei; Basic Sciences Journal of Textile Universities (Issue 4); full text *
Application of computer vision technology in automatic filling of offset-mouth barrels; Yue Xiaofeng; Wang Pingkai; Jiao Shengxi; Journal of Changchun University of Technology (Natural Science Edition) (Issue 3); full text *

Also Published As

Publication number Publication date
CN116721094A (en) 2023-09-08

Similar Documents

Publication Publication Date Title
CN110363858B (en) Three-dimensional face reconstruction method and system
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN106408609B (en) A kind of parallel institution end movement position and posture detection method based on binocular vision
US6819318B1 (en) Method and apparatus for modeling via a three-dimensional image mosaic system
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN110926330B (en) Image processing apparatus, image processing method, and program
CN110619660A (en) Object positioning method and device, computer readable storage medium and robot
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN111627073B (en) Calibration method, calibration device and storage medium based on man-machine interaction
CN109118529A (en) A kind of screw hole Image Quick Orientation method of view-based access control model
CN109934765B (en) High-speed camera panoramic image splicing method
CN116957987A (en) Multi-eye polar line correction method, device, computer equipment and storage medium
CN109191522B (en) Robot displacement correction method and system based on three-dimensional modeling
CN114119987A (en) Feature extraction and descriptor generation method and system based on convolutional neural network
CN116721094B (en) Visual pattern recognition method and system under complex background and laser cutting machine
CN113627210A (en) Method and device for generating bar code image, electronic equipment and storage medium
CN112132971B (en) Three-dimensional human modeling method, three-dimensional human modeling device, electronic equipment and storage medium
CN116503567B (en) Intelligent modeling management system based on AI big data
CN111145266B (en) Fisheye camera calibration method and device, fisheye camera and readable storage medium
CN114979464B (en) Industrial camera view angle accurate configuration method and system adaptive to target area
CN111311673A (en) Positioning method and device and storage medium
CN113029109B (en) Method and system for performing space-three encryption by utilizing near-infrared band image
CN117167936A (en) Air conditioner detection method, device and system, storage medium and electronic device
Wang et al. Research On 3D Reconstruction of Face Based on Binocualr Stereo Vision
CN117765174A (en) Three-dimensional reconstruction method, device and equipment based on monocular cradle head camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant