CN114708206A - Method, device, equipment and medium for identifying placing position of autoclave molding tool

Info

Publication number: CN114708206A
Authority: CN (China)
Prior art keywords: tool, image, platform, coordinate data, workpiece
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210294304.8A
Other languages: Chinese (zh)
Inventors: 魏士鹏, 王宁, 袁喆, 杨博先, 张娜娜
Current assignee: Chengdu Aircraft Industrial Group Co Ltd (the listed assignee may be inaccurate)
Original assignee: Chengdu Aircraft Industrial Group Co Ltd
Application filed by Chengdu Aircraft Industrial Group Co Ltd; priority to CN202210294304.8A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06K — GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 17/0025 — Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K 1/00 - G06K 15/00; arrangements or provisions for transferring data to distant stations, the arrangement consisting of a wireless interrogation device in combination with a device for optically marking the record carrier
    • G06T 3/4038 — Geometric image transformations; scaling; image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 7/12 — Segmentation; edge detection; edge-based segmentation
    • G06T 2207/20221 — Indexing scheme for image analysis or enhancement; image combination; image fusion, image merging


Abstract

The application discloses a method, a device, equipment and a medium for identifying the placing position of an autoclave molding tool. A workpiece supporting platform is photographed to obtain a platform image; the platform image contains a two-dimensional code arranged on a composite material workpiece, the workpiece is arranged on a tool, the tool is arranged on the workpiece supporting platform, and the two-dimensional code carries tool information. Tool contour recognition is performed on the platform image to obtain tool contour point coordinate data; tool position data are acquired from the tool contour point coordinate data; the platform image is cut according to the tool contour point coordinate data to obtain a plurality of tool images; the two-dimensional code on each tool image is identified to obtain the tool information on that tool image; and the tool position data and the tool information are output. The method records tool position data accurately, reduces manual workload, and improves production efficiency.

Description

Method, device, equipment and medium for identifying placing position of autoclave molding tool
Technical Field
The application relates to the technical field of composite material manufacturing, in particular to a method, a device, equipment and a medium for identifying the placement position of an autoclave molding tool.
Background
The autoclave curing process is the main method for producing composite material components: a composite material part is heated and pressurized inside a tank, using high temperature and the pressure generated by compressed gas, to complete curing and molding. A composite material workpiece is placed on a tool, and the tool is then placed on a supporting platform and moved into the autoclave for molding and curing. When a large integral composite material part, or several composite material parts at once, are cured and molded, the result is influenced by multiple factors such as the gas flow in the tank, the structure of the mold, the placement position in the tank, and the heat released by the curing reaction of the composite material.
At present, the placing position of the tool in the autoclave is recorded manually, as a rough sketch on the back of a paper autoclave operation record book. This is inefficient, error-prone, and the recorded position often differs considerably from the actual one.
Disclosure of Invention
The application mainly aims to provide a method, a device, equipment and a medium for identifying the placing position of an autoclave molding tool, and aims to solve the technical problem that the accuracy of the existing method for identifying the placing position of the tool in an autoclave is low.
In order to achieve the purpose, the application provides a method for identifying the placing position of an autoclave molding tool, which comprises the following steps:
shooting a workpiece supporting platform to obtain a platform image; the platform image comprises a two-dimensional code arranged on a composite material workpiece, the composite material workpiece is arranged on a tool, the tool is arranged on the workpiece supporting platform, and the two-dimensional code comprises tool information;
carrying out tool outline identification on the platform image to obtain tool outline point coordinate data;
acquiring tool position data according to the tool contour point coordinate data;
cutting the platform image according to the tool outline point coordinate data to obtain a plurality of tool images;
identifying the two-dimensional code on the tool image to obtain tool information on the tool image;
and outputting the tool position data and the tool information.
Optionally, the step of capturing the workpiece support platform to obtain a platform image includes:
shooting the workpiece supporting platform through a camera to obtain a platform image; the number of the cameras is determined according to the size of the workpiece supporting platform and the size of the visual field of the cameras.
Optionally, if the number of the cameras is multiple; the step of capturing the workpiece support platform with a camera to obtain a platform image includes:
shooting the workpiece support platform through a plurality of cameras to obtain a plurality of sub-platform images;
and carrying out image splicing on the plurality of sub-platform images to obtain the platform image.
Optionally, the image stitching method includes the following steps:
dividing each sub-platform image into 4 equal regions, and for the regions of different sub-platform images, finding the pair with the maximum normalized cross-correlation coefficient as the similar region;
selecting the region blocks with the highest similarity, and extracting and matching image feature points of the sub-platform images with the SIFT algorithm;
denoting the two most similar region blocks A and B, and performing image fusion with a weighted average method to generate the pixel values of each point of a new image, thereby obtaining a target image;
and returning to the step of selecting the most similar region blocks and repeating the SIFT feature-point extraction and matching for the remaining sub-platform images, until a complete stitched platform image is obtained.
Optionally, if the size of each region of the sub-platform image is m×n, the normalized cross-correlation coefficient is:

$$\rho=\frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(g_{ij}-\bar{g}\right)\left(f_{ij}-\bar{f}\right)}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(g_{ij}-\bar{g}\right)^{2}\cdot\sum_{i=1}^{m}\sum_{j=1}^{n}\left(f_{ij}-\bar{f}\right)^{2}}}\tag{1}$$

where g and f represent two different sub-platform images, g_ij and f_ij are the pixel values of sub-platform images g and f in row i and column j, and \(\bar{g}\), \(\bar{f}\) are the mean pixel values of the corresponding regions.

Rearranging formula (1) gives:

$$\rho=\frac{S_{gf}-\frac{1}{mn}S_{g}S_{f}}{\sqrt{\left(S_{gg}-\frac{1}{mn}S_{g}^{2}\right)\left(S_{ff}-\frac{1}{mn}S_{f}^{2}\right)}}\tag{2}$$

where

$$S_{gf}=\sum_{i=1}^{m}\sum_{j=1}^{n}g_{ij}f_{ij},\quad S_{gg}=\sum_{i=1}^{m}\sum_{j=1}^{n}g_{ij}^{2},\quad S_{ff}=\sum_{i=1}^{m}\sum_{j=1}^{n}f_{ij}^{2},\quad S_{g}=\sum_{i=1}^{m}\sum_{j=1}^{n}g_{ij},\quad S_{f}=\sum_{i=1}^{m}\sum_{j=1}^{n}f_{ij}.$$

S_gf, S_ff, S_gg, S_f and S_g are collectively referred to as the energy of the images; g_ij and f_ij are the pixel values of sub-platform images g and f in row i and column j. The smaller the energy difference between two region blocks, the higher their similarity.
Optionally, the expression for the pixel values of each point of the new image is:

$$C\left(x_{i},y\right)=\mu A\left(x_{i},y\right)+\left(1-\mu\right)B\left(x_{i},y\right)\tag{3}$$

$$\mu=\frac{x_{r}-x_{i}}{x_{r}-x_{l}}\tag{4}$$

where x_i is the abscissa of a pixel point in the stitching region; x_l is the abscissa of the leftmost pixel point in the stitching region; x_r is the abscissa of the rightmost pixel point in the stitching region; and μ is the weight, with a value range of 0 to 1.
Optionally, before the step of photographing the workpiece support platform to obtain the platform image, the method further includes the following steps:
laying a colored target at any vertex of the workpiece supporting platform, and placing the composite material workpiece on the workpiece supporting platform through the tool;
sequentially paving an air-permeable felt and a plastic film on the upper surface of the composite material workpiece, arranging the two-dimensional code on the plastic film, and then placing the workpiece supporting platform in front of a tank door of the autoclave;
and installing a camera above the workpiece supporting platform.
Optionally, the identifying the tool contour of the platform image to obtain tool contour point coordinate data includes:
determining coordinate data of a target according to the pixel value of the platform image;
and carrying out image preprocessing on the platform image to obtain air-permeable felt contour coordinate data, which serve as the tool contour point coordinate data.
Optionally, the image preprocessing comprises:
graying and binarizing the platform image to obtain a binary image;
deleting the objects with the areas smaller than a threshold value C in the binary image;
and applying a Sobel edge detection algorithm to obtain the air-permeable felt contour coordinate data.
Optionally, the obtaining of the tool position data according to the tool contour point coordinate data includes:
taking the target position of the workpiece supporting platform as a coordinate system origin;
and taking the coordinate of the tool contour point closest to the origin of the coordinate system in the tool contour point coordinate data as the position of the tool to obtain tool position data.
Optionally, the cutting the platform image according to the tool contour point coordinate data to obtain a plurality of tool images includes:
selecting a 4-neighborhood or 8-neighborhood labeled connected region for the tool contour point coordinate data;
extracting the abscissa maximum x_max, abscissa minimum x_min, ordinate maximum y_max, and ordinate minimum y_min of the connected region;
calculating, from x_max, x_min, y_max and y_min, the length and width of the minimum rectangle containing the connected region, recorded respectively as dx = x_max − x_min and dy = y_max − y_min;
and cutting the platform image, taking the tool contour point in the tool contour point coordinate data closest to the target position of the workpiece support platform as the starting point and dx and dy as the cutting length and width, to obtain a plurality of tool images.
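The cutting described above can be sketched as follows, assuming the platform image and a tool's contour points are held as numpy arrays and the target (coordinate origin) sits at the platform vertex; the helper name `crop_tool_image` is hypothetical:

```python
import numpy as np

def crop_tool_image(platform_img: np.ndarray, contour_pts: np.ndarray) -> np.ndarray:
    """Cut one tool image out of the platform image.

    contour_pts is an (N, 2) array of (x, y) tool contour coordinates. The cut
    starts at the contour point nearest the origin (the target position) and
    spans the minimum enclosing rectangle dx x dy of the contour.
    """
    xs, ys = contour_pts[:, 0], contour_pts[:, 1]
    dx = int(xs.max() - xs.min())
    dy = int(ys.max() - ys.min())
    # contour point closest to the platform target at the origin
    start = contour_pts[int(np.argmin((contour_pts.astype(float) ** 2).sum(axis=1)))]
    x0, y0 = int(start[0]), int(start[1])
    return platform_img[y0:y0 + dy, x0:x0 + dx]
```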
The application further provides a device for identifying the placing position of an autoclave molding tool, including:
the platform image acquisition module is used for shooting the workpiece supporting platform to obtain a platform image; the platform image comprises a two-dimensional code arranged on a composite material workpiece, the composite material workpiece is arranged on a tool, the tool is arranged on the workpiece supporting platform, and the two-dimensional code comprises tool information;
the contour data acquisition module is used for carrying out tool contour identification on the platform image so as to obtain tool contour point coordinate data;
the position data acquisition module is used for acquiring tool position data according to the tool contour point coordinate data;
the tool image acquisition module is used for cutting the platform image according to the tool contour point coordinate data to obtain a plurality of tool images;
the tool information acquisition module is used for identifying the two-dimensional code on the tool image so as to acquire the tool information on the tool image;
and the output module is used for outputting the tool position data and the tool information.
A computer device includes a memory in which a computer program is stored and a processor; when the processor executes the computer program, the method described above is implemented.
A computer-readable storage medium has a computer program stored thereon; when the computer program is executed by a processor, the method described above is implemented.
The beneficial effect that this application can realize is as follows:
according to the method, the platform image of the workpiece supporting platform is collected, the platform image is subjected to contour recognition to obtain tool contour point coordinate data, real and accurate actual tool position data can be obtained through the tool contour point coordinate data, then the platform image is cut through the tool contour point coordinate data to obtain a plurality of tool images, tool information on each tool image is recognized through the two-dimensional code, the current tool placing position state can be recognized and recorded completely and accurately through the tool position data in combination with the tool information, therefore, the method can replace an operation mode of manually recording the placing position of the tool in the hot pressing tank, risks caused by manual recording errors are reduced to the minimum, the tool position data are recorded accurately, manual burdens can be saved, and the production efficiency is improved.
Drawings
In order to more clearly illustrate the detailed description of the present application or the technical solutions in the prior art, the drawings that are needed in the detailed description of the present application or the technical solutions in the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a method for identifying a placement position of an autoclave molding tool in an embodiment of the present application;
fig. 2 is a schematic structural principle diagram in the process of identifying the position of the tool in the embodiment of the present application.
Reference numerals:
110-a workpiece supporting platform, 120-a tooling, 130-a composite material workpiece, 140-a two-dimensional code, 150-a camera, 160-a plastic film and 170-an autoclave.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that all directional indications in the embodiments of the present application (such as up, down, left, right, front and back) are only used to explain the relative position relationship and motion of components in a specific posture; if the specific posture changes, the directional indication changes accordingly.
In this application, unless expressly stated or limited otherwise, the terms "connected," "secured," and the like are to be construed broadly, and for example, "secured" may be a fixed connection, a removable connection, or an integral part; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
In addition, if there is a description of "first", "second", etc. in the embodiments of the present application, the description of "first", "second", etc. is for descriptive purposes only and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the meaning of "and/or" appearing throughout includes three juxtapositions, exemplified by "A and/or B" including either A or B or both A and B. In addition, technical solutions between the embodiments may be combined with each other, but must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory to each other or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope claimed in the present application.
Examples
Referring to fig. 1 to 2, the present embodiment provides a method for identifying a placement position of an autoclave molding tool, including the following steps:
shooting the workpiece support platform 110 to obtain a platform image; the platform image comprises a two-dimensional code 140 arranged on a composite material part 130, the composite material part 130 is arranged on a tool 120, the tool 120 is arranged on the part supporting platform 110, and the two-dimensional code 140 contains tool information;
carrying out tool outline identification on the platform image to obtain tool outline point coordinate data;
acquiring tool position data according to the tool contour point coordinate data;
cutting the platform image according to the tool outline point coordinate data to obtain a plurality of tool images;
identifying the two-dimensional code 140 on the tool image to obtain tool information on the tool image;
and outputting the tool position data and the tool information.
In this embodiment, the platform image of the workpiece support platform 110 is collected and contour recognition is performed on it to obtain tool contour point coordinate data, from which real and accurate tool position data are obtained. The platform image is then cut according to the tool contour point coordinate data to obtain a plurality of tool images, and the tool information on each tool image is recognized through its two-dimensional code 140, so that the current tool placing position can be recognized and recorded completely and accurately from the tool position data combined with the tool information.
It should be noted that the tool information is the drawing number marking information of the tool 120, used to verify the drawing number corresponding to the current tool 120; this information is converted into the two-dimensional code 140, which facilitates scanning and identification. In addition, before the workpiece support platform 110 is photographed, its position needs to be fixed in front of the door of the autoclave 170.
As an alternative embodiment, the step of capturing the object support platform 110 to obtain the platform image includes:
capturing the part support platform 110 by a camera 150 to obtain a platform image; the number of the cameras 150 is determined according to the size of the workpiece support platform 110 and the size of the field of view of the cameras 150.
In this embodiment, the number of cameras 150 is selected according to the field of view of each camera 150, so that the combined fields of view of all cameras 150 cover the entire workpiece support platform 110.
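The camera-count rule can be made concrete with a small sketch; the function and its optional overlap parameter are our assumptions (the application only requires that the combined fields of view cover the platform):

```python
import math

def camera_count(platform_len: float, fov_len: float, overlap: float = 0.0) -> int:
    """Minimum number of cameras along one platform axis so that the combined
    fields of view cover the whole platform; `overlap` reserves a shared strip
    between adjacent views for stitching (same length unit throughout)."""
    if platform_len <= fov_len:
        return 1
    # each additional camera contributes (fov_len - overlap) of new coverage
    return 1 + math.ceil((platform_len - fov_len) / (fov_len - overlap))
```

For example, a 10 m platform with 4 m fields of view and 1 m of reserved overlap needs 3 cameras.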
It should be noted that, the shooting process of the camera 150 can be performed by the upper computer controller, and the degree of automation is high.
As an alternative embodiment, if the number of the cameras 150 is multiple; the step of capturing the object support platform 110 with the camera 150 to obtain a platform image includes:
capturing images of the article support platform 110 with a plurality of cameras 150 to obtain a plurality of sub-platform images;
and carrying out image splicing on the plurality of sub-platform images to obtain the platform image.
In the present embodiment, whether image stitching is required is determined by the number of cameras 150. When two or more cameras are used, the fields of view of adjacent cameras must overlap, so image stitching is required; when a single camera 150 is used, no stitching is needed.
As an optional implementation manner, the method for image stitching includes the following steps:
dividing each sub-platform image into 4 equal regions, and for the regions of different sub-platform images, finding the pair with the maximum normalized cross-correlation coefficient as the similar region;
selecting the region blocks with the highest similarity, and extracting and matching image feature points of the sub-platform images with the SIFT algorithm;
denoting the two most similar region blocks A and B, and performing image fusion with a weighted average method to generate the pixel values of each point of a new image, thereby obtaining a target image;
and returning to the step of selecting the most similar region blocks and repeating the SIFT feature-point extraction and matching for the remaining sub-platform images, until a complete stitched platform image is obtained.
In this embodiment, after the cameras have acquired their images, stitching proceeds as follows: each sub-platform image is divided into 4 equal regions, and the pair of regions with the maximum normalized cross-correlation coefficient is taken as the similar region; image feature points of the sub-platform images are extracted and matched with the SIFT algorithm; the two most similar region blocks are then fused with a weighted average method to generate the pixel values of each point of a new image, yielding a target image; and the target images obtained from all sub-platform images are combined into a complete platform image. The stitching is thereby accurate and reliable.
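The quadrant-matching step above can be sketched as follows: each image is split into 4 equal regions and the pair with the largest normalized cross-correlation coefficient is selected as the similar region (numpy and the helper names are our assumptions; the SIFT matching that follows is not shown):

```python
import numpy as np

def quadrants(img: np.ndarray) -> list:
    """Split an image into its 4 equal regions (TL, TR, BL, BR)."""
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    return [img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:]]

def most_similar_regions(img1: np.ndarray, img2: np.ndarray) -> tuple:
    """Index pair (i, j) of the quadrants of img1 and img2 with the maximum
    normalized cross-correlation coefficient."""
    def ncc(g, f):
        dg = g - g.mean()
        df = f - f.mean()
        return (dg * df).sum() / np.sqrt((dg ** 2).sum() * (df ** 2).sum())
    q1 = quadrants(img1.astype(float))
    q2 = quadrants(img2.astype(float))
    return max(((i, j) for i in range(4) for j in range(4)),
               key=lambda p: ncc(q1[p[0]], q2[p[1]]))
```

With two adjacent camera views, the overlapping strip reappears in both images, so the matching quadrant pair scores near 1.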
It should be noted that the normalized cross-correlation coefficient mainly describes the degree of similarity between two signals; in image matching it is commonly used to describe the similarity between image sequences or images taken from different viewing angles. Its value satisfies −1 ≤ ρ ≤ 1: when two images are identical the coefficient equals 1, and when their gray-level distributions are completely opposite it equals −1.
The SIFT algorithm is a computer vision algorithm, is used for detecting and describing local features in an image, finds extreme points in a spatial scale, and extracts position, scale and rotation invariants of the extreme points.
As an alternative embodiment, assuming that the size of each region of the sub-platform image is m×n, the normalized cross-correlation coefficient is:

$$\rho=\frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(g_{ij}-\bar{g}\right)\left(f_{ij}-\bar{f}\right)}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(g_{ij}-\bar{g}\right)^{2}\cdot\sum_{i=1}^{m}\sum_{j=1}^{n}\left(f_{ij}-\bar{f}\right)^{2}}}\tag{1}$$

where g and f represent two different sub-platform images, g_ij and f_ij are the pixel values of sub-platform images g and f in row i and column j, and \(\bar{g}\), \(\bar{f}\) are the mean pixel values of the corresponding regions.

Rearranging formula (1) gives:

$$\rho=\frac{S_{gf}-\frac{1}{mn}S_{g}S_{f}}{\sqrt{\left(S_{gg}-\frac{1}{mn}S_{g}^{2}\right)\left(S_{ff}-\frac{1}{mn}S_{f}^{2}\right)}}\tag{2}$$

where

$$S_{gf}=\sum_{i=1}^{m}\sum_{j=1}^{n}g_{ij}f_{ij},\quad S_{gg}=\sum_{i=1}^{m}\sum_{j=1}^{n}g_{ij}^{2},\quad S_{ff}=\sum_{i=1}^{m}\sum_{j=1}^{n}f_{ij}^{2},\quad S_{g}=\sum_{i=1}^{m}\sum_{j=1}^{n}g_{ij},\quad S_{f}=\sum_{i=1}^{m}\sum_{j=1}^{n}f_{ij}.$$

S_gf, S_ff, S_gg, S_f and S_g are collectively referred to as the energy of the images; g_ij and f_ij are the pixel values of sub-platform images g and f in row i and column j. The smaller the energy difference between two region blocks, the higher their similarity.
As an alternative embodiment, the expression for the pixel values of each point of the new image is:

$$C\left(x_{i},y\right)=\mu A\left(x_{i},y\right)+\left(1-\mu\right)B\left(x_{i},y\right)\tag{3}$$

$$\mu=\frac{x_{r}-x_{i}}{x_{r}-x_{l}}\tag{4}$$

where x_i is the abscissa of a pixel point in the stitching region; x_l is the abscissa of the leftmost pixel point in the stitching region; x_r is the abscissa of the rightmost pixel point of the stitching region; and μ is the weight, with a value range of 0 to 1.

Thus, combining formulas (3) and (4) gives:

$$C\left(x_{i},y\right)=\frac{x_{r}-x_{i}}{x_{r}-x_{l}}A\left(x_{i},y\right)+\frac{x_{i}-x_{l}}{x_{r}-x_{l}}B\left(x_{i},y\right)\tag{5}$$
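The weighted-average fusion amounts to a linear cross-fade over the overlap: the weight μ falls from 1 at the left edge to 0 at the right edge. A minimal numpy sketch (the function name is ours, not from the application):

```python
import numpy as np

def blend_overlap(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Weighted-average fusion of two registered overlap regions of equal shape."""
    h, w = A.shape
    x = np.arange(w, dtype=float)      # abscissa x_i of each column
    x_l, x_r = 0.0, float(w - 1)       # leftmost / rightmost abscissas
    mu = (x_r - x) / (x_r - x_l)       # weight: 1 at x_l, 0 at x_r
    return mu * A.astype(float) + (1.0 - mu) * B.astype(float)
```

The left edge of the result equals A, the right edge equals B, with a smooth transition in between, which suppresses visible seams.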
as an alternative embodiment, before the step of capturing the object support platform 110 to obtain the platform image, the method further comprises the following steps:
laying a target with color at any vertex of the workpiece support platform 110, and placing the composite material workpiece 130 on the workpiece support platform 110 through the tool 120;
sequentially paving an air-permeable felt and a plastic film 160 on the upper surface of the composite material part 130, arranging the two-dimensional code 140 on the plastic film 160, and then placing the part supporting platform 110 in front of a tank door of an autoclave 170;
a camera 150 is mounted above the artefact support platform 110.
In this embodiment, after this arrangement, the subsequent shooting and recognition operations can be performed. It should be noted that the target may be red or another striking color, so that the camera 150 can recognize it easily. The air-permeable felt and the plastic film 160 fit closely against the composite material part 130 after vacuumizing, and the two-dimensional code 140 is attached to the plastic film 160. The two-dimensional code 140 of this embodiment must have corresponding characteristics: it is made of PET material with good fatigue resistance and creep resistance, and can withstand high temperature in an environment of 250 °C.
As an optional implementation manner, the performing tool contour recognition on the platform image to obtain tool contour point coordinate data includes:
determining coordinate data of a target according to the pixel value of the platform image;
and carrying out image preprocessing on the platform image to obtain air felt contour coordinate data, wherein the air felt contour coordinate data is tooling contour point coordinate data.
In this embodiment, the outline of the air-permeable felt is taken as the positioning area, and its contour coordinate data are taken as the tool contour point coordinate data, so that positioning and recognition are accurate and the tool position data can be obtained. The tool contour may be denoted C = {P_1, P_2, ..., P_i, ..., P_n}, where P_i is the coordinate data of a tool contour point.
As an optional implementation, the image preprocessing includes:
graying and binarizing the platform image to obtain a binary image;
deleting objects with the area smaller than a threshold value in the binary image;
and (4) using a Sobel edge detection algorithm to obtain air felt contour coordinate data.
In this embodiment, binarization is the simplest method of image segmentation: a grayscale image is converted into a binary image by setting pixels whose gray level exceeds a critical value to the maximum gray value, and pixels below that value to the minimum gray value. The Sobel edge detection algorithm comprises two steps: first, the Sobel operator extracts gray-difference information, i.e. the image gradient values; then boundary information is further extracted with a single threshold, finally yielding accurate contour coordinate data of the air-permeable felt.
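The binarization and Sobel steps can be sketched with plain numpy (a naive, unoptimized convolution for illustration; a production version would use an image-processing library):

```python
import numpy as np

def binarize(gray: np.ndarray, thresh: float) -> np.ndarray:
    """Pixels above the critical gray value become 1, the rest 0."""
    return (gray > thresh).astype(float)

def sobel_magnitude(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude map from the 3x3 Sobel operators (borders left 0)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot((win * kx).sum(), (win * ky).sum())
    return out
```

Thresholding the magnitude map then yields the boundary pixels whose coordinates form the contour data.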
As an optional implementation manner, the obtaining of the tool position data according to the tool contour point coordinate data includes:
taking the target position of the workpiece support platform 110 as the origin of a coordinate system;
and taking the coordinate of the tool contour point closest to the origin of the coordinate system in the tool contour point coordinate data as the position of the tool 120 to obtain tool position data.
In this embodiment, the target position is used as the origin of the coordinate system, and the coordinates of the tool contour point closest to this origin are taken as the position of the tool 120. This point is the most representative, so using it as the tool position data is both faithful and accurate, and the tool position data is thereby acquired.
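Picking the contour point nearest the target-based origin is a one-line selection; a minimal sketch (the function name and tuple representation are illustrative):

```python
import math

def tool_position(contour_points, origin=(0.0, 0.0)):
    """Return the tool contour point closest to the coordinate-system
    origin (the colored target at a platform vertex) as the tool position."""
    ox, oy = origin
    return min(contour_points, key=lambda p: math.hypot(p[0] - ox, p[1] - oy))
```

For example, among the points (5, 5), (2, 1), and (9, 0), the point (2, 1) lies closest to the origin and would be reported as the tool position.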
As an optional implementation manner, cutting the platform image according to the tool contour point coordinate data to obtain a plurality of tool images includes:
labeling connected regions of the tool contour point coordinate data using 4-neighborhood or 8-neighborhood connectivity;
extracting the maximum x of the abscissa of the connected regionmaxAbscissa minimum value xminMaximum value y of ordinatemaxOrdinate minimum value ymin
According to the maximum value x of the abscissamaxThe minimum value x of the abscissaminMaximum value y of ordinatemaxOrdinate minimum value yminAnd calculating the length and the width of the minimum rectangle containing the connected region, and respectively recording as: dx is xmax-xmin,dy=ymax-ymin
and cutting the platform image, taking the tool contour point closest to the target position of the workpiece support platform 110 in each set of tool contour point coordinate data as the starting point and dx and dy as the cutting length and width, to obtain a plurality of tool images.
In the present embodiment, a plurality of tool outline images are obtained by cutting the complete platform image, with the tool contour point closest to the target position of the workpiece support platform 110 in each tool contour parameter (i.e., the tool contour point coordinate data) as the starting point and the length and width of the minimum rectangle of the tool outline as the cutting length and width. The two-dimensional code 140 on each tool image is then recognized to obtain the tool information. This replaces manually recording the placement position of each tool 120 in the autoclave 170, reducing manual workload and improving production efficiency.
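The connected-region labeling and minimum-bounding-rectangle cutting described above can be sketched in plain numpy. This is an illustrative version, not the patent's code: it labels a binary mask with a BFS flood fill (4- or 8-neighborhood), then crops one sub-image per region using dx = x_max - x_min and dy = y_max - y_min; anchoring the cut at the point nearest the target is left out for brevity.

```python
from collections import deque
import numpy as np

def label_regions(mask, connectivity=8):
    """Label connected regions of a binary mask (4- or 8-neighborhood)."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    if connectivity == 4:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        nbrs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)]
    cur = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and labels[y, x] == 0:
                cur += 1                      # start a new region
                q = deque([(y, x)])
                labels[y, x] = cur
                while q:                      # BFS flood fill
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = cur
                            q.append((ny, nx))
    return labels, cur

def crop_tools(image, mask, connectivity=8):
    """Crop one sub-image per connected region using its minimum
    bounding rectangle: dx = x_max - x_min, dy = y_max - y_min."""
    labels, n = label_regions(mask, connectivity)
    crops = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        x_min, x_max = xs.min(), xs.max()
        y_min, y_max = ys.min(), ys.max()
        crops.append(image[y_min:y_max + 1, x_min:x_max + 1])
    return crops
```

Each crop then carries exactly one tool's two-dimensional code, ready for decoding.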
Example 2
As shown in fig. 2, the present embodiment provides an autoclave molding tool locating position recognition device, including:
a platform image acquisition module for photographing the workpiece support platform 110 to obtain a platform image; the platform image comprises a two-dimensional code 140 arranged on a composite material workpiece 130, the composite material workpiece 130 is arranged on a tool 120, the tool 120 is arranged on the workpiece support platform 110, and the two-dimensional code 140 contains tool information;
the contour data acquisition module is used for carrying out tool contour identification on the platform image so as to obtain tool contour point coordinate data;
the position data acquisition module is used for acquiring tool position data according to the tool contour point coordinate data;
the tool image acquisition module is used for cutting the platform image according to the tool contour point coordinate data to obtain a plurality of tool images;
the tool information acquisition module is used for identifying the two-dimensional code 140 on the tool image so as to acquire the tool information on the tool image;
and the output module is used for outputting the tool position data and the tool information.
Example 3
The present embodiment provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program to implement the method described in embodiment 1.
Example 4
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, and a processor executes the computer program to implement the method described in embodiment 1.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (14)

1. A method for identifying the placing position of an autoclave molding tool is characterized by comprising the following steps:
shooting a workpiece supporting platform to obtain a platform image; the platform image comprises a two-dimensional code arranged on a composite material workpiece, the composite material workpiece is arranged on a tool, the tool is arranged on the workpiece supporting platform, and the two-dimensional code comprises tool information;
carrying out tool outline identification on the platform image to obtain tool outline point coordinate data;
acquiring tool position data according to the tool contour point coordinate data;
cutting the platform image according to the tool outline point coordinate data to obtain a plurality of tool images;
identifying the two-dimensional code on the tool image to obtain tool information on the tool image;
and outputting the tool position data and the tool information.
2. The method for identifying the placement position of the autoclave molding tool according to claim 1, wherein the step of photographing the workpiece support platform to obtain the platform image comprises:
shooting the workpiece supporting platform through a camera to obtain a platform image; the number of the cameras is determined according to the size of the workpiece supporting platform and the size of the visual field of the cameras.
3. The autoclave molding tool placement position recognition method according to claim 2, wherein, if the number of the cameras is plural, the step of photographing the workpiece support platform with a camera to obtain a platform image includes:
shooting the workpiece support platform through a plurality of cameras to obtain a plurality of sub-platform images;
and carrying out image splicing on the plurality of sub-platform images to obtain the platform image.
4. The method for identifying the placing position of the autoclave molding tool according to claim 3, wherein the image splicing method comprises the following steps:
equally dividing each sub-platform image into 4 equal regions, and for the regions of each sub-platform image, respectively finding the region with the maximum normalized cross-correlation coefficient as the similar region;
selecting the region blocks with high similarity, and extracting and matching image feature points of the sub-platform images using the SIFT algorithm;
denoting the region blocks with higher similarity as A and B, and performing image fusion by a weighted average method to generate the pixel value of each point of a new image, thereby obtaining a target image;
and returning to the step of selecting region blocks with high similarity and extracting and matching image feature points of the sub-platform images using the SIFT algorithm, until a spliced complete platform image is obtained.
5. The method for identifying the placement position of the autoclave molding tool according to claim 4, wherein, if the size of each region of the sub-platform image is m×n, the normalized cross-correlation coefficient is:
$$R = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}(g_{ij}-\bar{g})(f_{ij}-\bar{f})}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}(g_{ij}-\bar{g})^{2}\sum_{i=1}^{m}\sum_{j=1}^{n}(f_{ij}-\bar{f})^{2}}}$$
wherein
$$\bar{g} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} g_{ij}, \qquad \bar{f} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} f_{ij}$$
where g_ij and f_ij are the pixel values in the i-th row and j-th column of sub-platform image g and sub-platform image f, respectively.
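The normalized cross-correlation coefficient defined above can be computed directly in numpy. This is a minimal sketch; the function name and the zero-denominator guard are illustrative additions, not from the patent:

```python
import numpy as np

def ncc(g, f):
    """Normalized cross-correlation coefficient of two equal-size
    image regions g and f (each m x n)."""
    g = np.asarray(g, dtype=float)
    f = np.asarray(f, dtype=float)
    dg = g - g.mean()          # g_ij minus the region mean
    df = f - f.mean()          # f_ij minus the region mean
    denom = np.sqrt((dg ** 2).sum() * (df ** 2).sum())
    if denom == 0.0:           # constant region: correlation undefined
        return 0.0
    return float((dg * df).sum() / denom)
```

The coefficient is 1 for regions that differ only by brightness scale and offset, and -1 for inverted regions, which is why the region pair maximizing it is taken as the similar region.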
6. The method for identifying the placing position of the autoclave molding tool according to claim 5, wherein the expressions of the pixel values of each point of the new image are as follows:
$$f(x_i) = \mu f_1(x_i) + (1-\mu) f_2(x_i)$$
$$\mu = \frac{x_r - x_i}{x_r - x_l}$$
where x_i is the abscissa of a pixel point in the splicing region; x_l is the abscissa of the leftmost pixel point of the splicing region; x_r is the abscissa of the rightmost pixel point of the splicing region; and μ is the weight, whose value ranges from 0 to 1.
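The weighted-average fusion of claim 6 amounts to a linear ramp of μ across the overlap. A minimal sketch for 2-D grayscale strips (the function name and equal-size-overlap assumption are illustrative):

```python
import numpy as np

def blend_overlap(left, right):
    """Blend two overlapping strips of equal shape: the weight mu falls
    linearly from 1 at the left edge (x_l) to 0 at the right edge (x_r),
    so pixel = mu*left + (1 - mu)*right."""
    assert left.shape == right.shape
    h, w = left.shape[:2]
    x = np.arange(w, dtype=float)
    mu = (w - 1 - x) / (w - 1) if w > 1 else np.ones(1)  # (x_r - x_i) / (x_r - x_l)
    mu = mu.reshape(1, w)
    return mu * left + (1.0 - mu) * right
```

Near x_l the fused image follows the left strip, near x_r the right strip, which removes visible seams in the spliced platform image.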
7. The method for identifying the placement position of the autoclave molding tool according to claim 1, wherein before the step of photographing the workpiece support platform to obtain the platform image, the method further comprises the following steps:
arranging a colored target at any vertex of the workpiece support platform, and placing the composite material workpiece on the workpiece support platform via the tool;
sequentially paving an air-permeable felt and a plastic film on the upper surface of the composite material workpiece, arranging the two-dimensional code on the plastic film, and then placing the workpiece supporting platform in front of a tank door of the autoclave;
and installing a camera above the workpiece supporting platform.
8. The autoclave molding tool placement position identification method according to claim 7, wherein the step of performing tool contour identification on the platform image to obtain tool contour point coordinate data comprises:
determining coordinate data of a target according to the pixel value of the platform image;
and performing image preprocessing on the platform image to obtain air-permeable felt contour coordinate data, wherein the air-permeable felt contour coordinate data is the tool contour point coordinate data.
9. The autoclave molding tool placement position recognition method according to claim 8, wherein the image preprocessing comprises:
graying and binarizing the platform image to obtain a binary image;
deleting the objects with the areas smaller than a threshold value C in the binary image;
and applying a Sobel edge detection algorithm to obtain the air-permeable felt contour coordinate data.
10. The method for identifying the placement position of the autoclave molding tool according to claim 8, wherein the obtaining of the tool position data according to the tool contour point coordinate data comprises:
taking the target position of the workpiece supporting platform as a coordinate system origin;
and taking the coordinate of the tool contour point closest to the origin of the coordinate system in the tool contour point coordinate data as the position of the tool to obtain tool position data.
11. The method for identifying the placing position of the autoclave molding tool according to claim 8, wherein the cutting the platform image according to the tool contour point coordinate data to obtain a plurality of tool images comprises:
labeling connected regions of the tool contour point coordinate data using 4-neighborhood or 8-neighborhood connectivity;
extracting the maximum x of the abscissa of the connected regionmaxThe minimum value x of the abscissaminMaximum value y of ordinatemaxOrdinate minimum value ymin
According to the maximum value x of the abscissamaxThe minimum value x of the abscissaminMaximum value y of ordinatemaxOrdinate minimum value yminAnd calculating the length and the width of the minimum rectangle containing the connected region, and respectively recording the length and the width as: x ismax-xmin,dy=ymax-ymin
and cutting the platform image, taking the tool contour point closest to the target position of the workpiece support platform in the tool contour point coordinate data as the starting point and dx and dy as the cutting length and width, to obtain a plurality of tool images.
12. An autoclave molding tool placement position recognition device, characterized by comprising:
the platform image acquisition module is used for shooting the workpiece supporting platform to obtain a platform image; the platform image comprises a two-dimensional code arranged on a composite material workpiece, the composite material workpiece is arranged on a tool, the tool is arranged on the workpiece supporting platform, and the two-dimensional code comprises tool information;
the contour data acquisition module is used for carrying out tool contour recognition on the platform image to obtain tool contour point coordinate data;
the position data acquisition module is used for acquiring tool position data according to the tool contour point coordinate data;
the tool image acquisition module is used for cutting the platform image according to the tool contour point coordinate data to obtain a plurality of tool images;
the tool information acquisition module is used for identifying the two-dimensional code on the tool image so as to acquire the tool information on the tool image;
and the output module is used for outputting the tool position data and the tool information.
13. A computer device, characterized in that the computer device comprises a memory and a processor, wherein a computer program is stored in the memory, and the processor executes the computer program to implement the method according to any one of claims 1-11.
14. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1-11.
CN202210294304.8A 2022-03-24 2022-03-24 Method, device, equipment and medium for identifying placing position of autoclave molding tool Pending CN114708206A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210294304.8A CN114708206A (en) 2022-03-24 2022-03-24 Method, device, equipment and medium for identifying placing position of autoclave molding tool


Publications (1)

Publication Number Publication Date
CN114708206A true CN114708206A (en) 2022-07-05

Family

ID=82170376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210294304.8A Pending CN114708206A (en) 2022-03-24 2022-03-24 Method, device, equipment and medium for identifying placing position of autoclave molding tool

Country Status (1)

Country Link
CN (1) CN114708206A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273777A (en) * 2017-04-26 2017-10-20 广东顺德中山大学卡内基梅隆大学国际联合研究院 A kind of Quick Response Code identification of code type method matched based on slide unit
CN110503682A (en) * 2019-08-08 2019-11-26 深圳市优讯通信息技术有限公司 The recognition methods of rectangle control, device, terminal and storage medium
CN112164001A (en) * 2020-09-29 2021-01-01 南京理工大学智能计算成像研究院有限公司 Digital microscope image rapid splicing and fusing method
CN112862692A (en) * 2021-03-30 2021-05-28 煤炭科学研究总院 Image splicing method applied to underground coal mine roadway
CN113298204A (en) * 2021-04-30 2021-08-24 成都飞机工业(集团)有限责任公司 Correlation method of data in composite material manufacturing process
CN113298090A (en) * 2021-05-19 2021-08-24 成都飞机工业(集团)有限责任公司 Autoclave aviation composite material blank identification method based on maximum profile
CN113362362A (en) * 2021-06-17 2021-09-07 易普森智慧健康科技(深圳)有限公司 Bright field microscope panoramic image alignment algorithm based on total variation area selection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Miao Ronghui et al., "Farmland image stitching based on improved Harris corner detection of image blocks", 《现代电子技术》 (Modern Electronics Technique), vol. 44, no. 2, 15 January 2021 (2021-01-15), pages 76-78 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination