CN114782778B - Assembly state monitoring method and system based on machine vision technology

Info

Publication number
CN114782778B
Authority
CN
China
Prior art keywords
state
fan rotor
picture
model
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210440344.9A
Other languages
Chinese (zh)
Other versions
CN114782778A (en)
Inventor
魏丽军
王孙康宏
姚绍文
刘婷
刘强
王满贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202210440344.9A priority Critical patent/CN114782778B/en
Publication of CN114782778A publication Critical patent/CN114782778A/en
Application granted granted Critical
Publication of CN114782778B publication Critical patent/CN114782778B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

An assembly state monitoring method based on machine vision technology is applied to an aviation fan rotor assembly process and comprises the following steps. Step S1: periodically acquire state pictures of the aviation fan rotor in the assembly stage, and perform visualization and preprocessing operations on the state pictures. Step S2: input the processed state picture into a recognition model to judge the state of the aviation fan rotor in the state picture, and obtain the current state stage of the aviation fan rotor. Step S3: match the current state stage of the aviation fan rotor with its current assembly stage, and if the two do not match, prompt that the current aviation fan rotor is installed incorrectly. Templates of all assembly states are obtained through the recognition model, and state pictures of the aviation fan rotor in the assembly stage are collected in real time and matched against the templates, so that automatic judgment of the assembly state is realized and assembly accuracy is improved.

Description

Assembly state monitoring method and system based on machine vision technology
Technical Field
The invention relates to the technical field of monitoring, in particular to an assembly state monitoring method and system based on a machine vision technology.
Background
The fan rotor is an important part of the aero-engine, characterized by a complex structure, a large number of parts and connecting pieces, high precision requirements and high manufacturing cost. Assembly is a key process for guaranteeing the quality, performance and service life of such a product; its workload is large, accounting for more than 40% of aero-engine manufacturing. The fan rotor of an aero-engine is assembled centrally in the limited space of discrete fixed stations; the process difficulty is high, the amount of manual operation is large, and the reliability of assembly quality and the stability of performance are difficult to control. To guarantee efficient and reliable assembly quality, workers must be ensured to operate strictly according to the process standard. In the traditional assembly process, the state information of the assembly site cannot be sensed in real time, and workers often fail to operate according to the process standard for the sake of convenience, so the reliability of assembly quality and the qualified rate of products are low.
Disclosure of Invention
In view of the above-mentioned drawbacks, the present invention provides an assembly state monitoring method and system based on machine vision technology, which can realize automatic recognition and judgment of the assembly state of the equipment and improve assembly accuracy.
In order to achieve this purpose, the invention adopts the following technical scheme: an assembly state monitoring method based on machine vision technology, applied to an aviation fan rotor assembly process, comprising the following steps:
step S1: periodically acquiring state pictures of the aviation fan rotor in an assembly stage, and performing visualization operation and preprocessing operation on the state pictures;
step S2: inputting the processed state picture into a recognition model to judge the state of the aviation fan rotor in the state picture, and acquiring the state stage of the current aviation fan rotor;
step S3: matching the state stage of the current aviation fan rotor with the assembly stage of the current aviation fan rotor, and prompting that the current aviation fan rotor is installed incorrectly if the two do not match.
Preferably, the training process of the recognition template in step S2 is as follows:
step S21: shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
step S22: dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
step S23: marking an assembly stage in the picture;
step S24: performing data enhancement operation on the picture, wherein the data enhancement operation comprises turning, rotating, scaling, cutting, translating and adding noise on the picture;
step S25: and inputting the training set pictures and the verification set pictures into the recognition model, acquiring the characteristic vectors, and finishing the training of the recognition template.
Preferably, in step S25, a YOLO model is selected as the recognition model to obtain the feature vectors in the training set pictures and the verification set pictures, wherein a convolution attention module is arranged in the YOLO model, between the backbone and the neck of the YOLO model;
the convolution attention module comprises a CAM submodule and a SAM submodule, and the training set pictures and the verification set pictures pass through the CAM submodule and the SAM submodule in sequence;
performing maximum pooling and average pooling on the feature vectors in the CAM submodule in parallel, passing the pooled results through a shared MLP, adding the MLP outputs element by element, and applying a sigmoid function activation to obtain the channel attention Mc;
and performing maximum pooling and average pooling in sequence on the feature vectors in the SAM submodule along the channel direction, obtaining an intermediate vector, performing a convolution operation on the intermediate vector, and taking the result of the convolution operation as the input of a sigmoid activation function to obtain the spatial attention MS.
Preferably, only one hidden layer is provided in the shared MLP.
Preferably, the loss function for the accuracy judgment of the YOLO model is as follows:
$\mathrm{Loss} = \lambda_1\,\mathrm{GIOV} + \lambda_2\,\mathrm{DIOV} + \lambda_3\,\mathrm{CIOV}$;
wherein $\lambda_1$, $\lambda_2$ and $\lambda_3$ are proportionality coefficients satisfying $\lambda_1 + \lambda_2 + \lambda_3 = 1$; GIOV is the shape loss, DIOV is the area loss, and CIOV is the position loss;
wherein
Figure BDA0003614856220000031
Figure BDA0003614856220000032
Figure BDA0003614856220000033
Wherein
Figure BDA0003614856220000034
Wherein a is the length and b the width of the prediction frame of the feature vector in the state picture, c is the length and d the width of the template in the YOLO model, IOU is the intersection-over-union of the prediction frame and the template in the state picture, $\rho^2$ denotes the squared Euclidean distance between the center of the prediction frame and the center of the template in the state picture, $b_t$ and $b_{gt}$ respectively denote the center of the template and the center of the prediction frame in the state picture, E denotes the diagonal length of the minimum bounding box enclosing the prediction frame and the template (with $A_E$ the area of this bounding box and $A_U$ the area of the union of the two frames), β is a positive weight coefficient, and v is the aspect-ratio consistency coefficient.
An assembly state monitoring system based on a machine vision technology uses the assembly state monitoring method based on the machine vision technology, and is characterized by comprising an equipment layer, a control layer and a model layer;
the equipment layer is provided with camera equipment, state pictures of the aviation fan rotor in the assembling process are periodically acquired through the camera equipment, and the visualized state pictures are sent to the control layer;
the control layer is provided with a display device, and the display device is used for displaying visual state pictures and receiving feedback of the model layer;
the model layer comprises a model processing module and a judging module, the model processing module is used for identifying the state stage of the aviation fan rotor in the state picture,
the judging module is used for acquiring the state stage of the aviation fan rotor and matching the current state stage of the aviation fan rotor with the current assembly stage of the aviation fan rotor, and if the two do not match, feeding back to the control layer that the current aviation fan rotor is installed incorrectly.
Preferably, the model processing module further comprises a template training submodule;
the template training submodule is used for:
shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
marking an assembly stage in the picture;
performing data enhancement operation on the picture, wherein the data enhancement operation comprises turning, rotating, scaling, cutting, translating and adding noise on the picture;
and inputting the training set pictures and the verification set pictures into the recognition model, acquiring the characteristic vectors, and finishing the training of the recognition template.
Preferably, the template training submodule further comprises a model subunit;
the model subunit selects a YOLO model as a recognition model to obtain the feature vectors in the training set picture and the verification set picture, wherein a convolution attention module is arranged in the YOLO model, and the convolution attention module is arranged between a backbone and a neck of the YOLO model;
the convolution attention module comprises a CAM submodule and an SAM submodule, and the training set picture and the verification set picture sequentially pass through the CAM submodule and the SAM submodule;
performing maximum pooling and average pooling on the feature vectors in the CAM submodule in parallel, passing the pooled results through the shared MLP, adding the MLP outputs element by element, and applying a sigmoid function activation to obtain the channel attention Mc;
and performing maximum pooling and average pooling in sequence on the feature vectors in the SAM submodule along the channel direction, obtaining an intermediate vector, performing a convolution operation on the intermediate vector, and taking the result of the convolution operation as the input of a sigmoid activation function to obtain the spatial attention MS.
One of the above technical solutions has the following advantages or beneficial effects: 1. Templates of all assembly states are obtained through the recognition model, and state pictures of the aviation fan rotor in the assembly stage are collected in real time and matched against the templates, so that automatic judgment of the assembly state is realized and assembly accuracy is improved.
2. Compared with traditional YOLO, which uses only the shape loss GIOV, the position loss CIOV in the invention adds to the loss a penalty term based on the center distance between the prediction frame and the template in the state picture and on their length-width ratio, so that the network makes the prediction frame converge faster during training and obtains higher regression positioning accuracy.
Drawings
FIG. 1 is a flow chart of one embodiment of the method of the present invention.
Fig. 2 is a schematic structural diagram of one embodiment of the system of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "axial", "radial", "circumferential", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted", "connected" and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a direct connection, or an indirect connection through intervening media, or a communication between the interiors of two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
As shown in fig. 1-2, an assembly state monitoring method based on machine vision technology, which is applied to an aviation fan rotor assembly process, includes the following steps:
step S1: periodically acquiring state pictures of the aviation fan rotor in an assembly stage, and performing visualization operation and preprocessing operation on the state pictures;
step S2: inputting the processed state picture into a recognition model to judge the state of the aviation fan rotor in the state picture, and acquiring the state stage of the current aviation fan rotor;
step S3: matching the state stage of the current aviation fan rotor with the assembly stage of the current aviation fan rotor, and prompting that the current aviation fan rotor is installed incorrectly if the two do not match.
In the prior art, the assembly of the aviation fan rotor is checked by manual inspection, which is very inefficient, and because the number of items to inspect is large, many additional inspection workers are required to support production. Therefore, in the method, state pictures of the aviation fan rotor in the assembly stage are acquired periodically, and the state pictures are then visualized, i.e., displayed in a human-observable form, for example converted into a format that a display screen can show and presented on the display screen, so that workers can also inspect the aviation fan rotor manually through the display screen. In addition, before the method is executed, a number of folders are set up according to the assembly process of the aviation fan rotor, and each folder stores the state pictures of only one assembly stage.
After the state pictures of the aviation fan rotor are obtained, a preprocessing operation is performed on them: the state pictures under each folder are traversed, and an index of the assembly stage corresponding to each folder is obtained, so that state pictures at different time points within an assembly stage can be found quickly through the index and traced in time.
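As an illustration, a minimal Python sketch of such a stage-indexing step is given below; the folder layout, naming and file extension are hypothetical examples, not part of the invention.

```python
from pathlib import Path

def build_stage_index(root="assembly_stages"):
    """Index state pictures by assembly stage, ordered by capture time,
    so any time point within a stage can be traced back through the index.
    The directory layout (one folder per stage) is a hypothetical example."""
    index = {}
    for stage_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        pictures = sorted(stage_dir.glob("*.jpg"),
                          key=lambda p: p.stat().st_mtime)  # oldest first
        index[stage_dir.name] = pictures
    return index
```

Calling build_stage_index() once then gives, for each assembly stage, the time-ordered list of state pictures that can be traced through the index.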
The state pictures are input into the recognition model in index order. The recognition model is trained in advance; after a state picture is input, the recognition model outputs the current state stage of the aviation fan rotor. The state stage is then matched against the assembly stage of the aviation fan rotor: if the match succeeds, the current aviation fan rotor is installed correctly and there is no installation error; if the state stage of the aviation fan rotor does not match its assembly stage, an installation error exists, and at this point the information about the installation error is fed back to the display screen to remind the inspection worker. The following example is given by way of illustration:
and a certain camera device acquires a state picture of the aviation fan rotor in the third assembly stage, and then the state picture of the aviation fan rotor in the third assembly stage is input into the identification model, the identification result output by the identification model is that the current state stage of the aviation fan rotor is the first assembly stage, the current state stage of the aviation fan rotor is not matched with the assembly stage during shooting, and a prompt that the current aviation fan rotor is installed wrongly is sent out.
Preferably, the training process of the recognition template in step S2 is as follows:
step S21: shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
step S22: dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
step S23: marking an assembly stage in the picture;
step S24: performing data enhancement operation on the picture, wherein the data enhancement operation comprises turning, rotating, scaling, cutting, translating and adding noise on the picture;
step S25: and inputting the training set picture and the verification set picture into the recognition model, acquiring the characteristic vector and finishing the training of the recognition template.
In the invention, the recognition model needs to be trained in advance. First, a number of pictures of the aviation fan rotor at different assembly stages are shot before training, with about 100 pictures per assembly stage, so that there is enough material to train the recognition model. The pictures are then divided proportionally into training set pictures and verification set pictures; in one embodiment the proportion is 7:3. Next, each picture is labeled with its assembly stage, so that the label is linked to the recognition result of the recognition model. Finally, a data enhancement operation is performed on the pictures; data enhancement increases the number and diversity of the samples seen by the recognition model during training, thereby improving the generalization ability and robustness of the recognition model and reducing the influence of extraneous factors on recognition.
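A minimal sketch of this preparation, assuming PyTorch and torchvision, is shown below; the 7:3 split and the augmentation types follow step S24, while the crop size, rotation angle and noise amplitude are illustrative values.

```python
import random
import torch
from torchvision import transforms

def split_dataset(picture_paths, train_ratio=0.7, seed=0):
    """Divide labeled state pictures into training and verification sets (7:3)."""
    paths = list(picture_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]

# Data enhancement of step S24: flip, rotate, scale, translate, crop, add noise.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1), scale=(0.8, 1.2)),
    transforms.RandomCrop(608, pad_if_needed=True),
    transforms.ToTensor(),
    transforms.Lambda(lambda t: (t + 0.01 * torch.randn_like(t)).clamp(0, 1)),
])
```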
Preferably, in step S25, a YOLO model is selected as the recognition model to obtain the feature vectors in the training set pictures and the verification set pictures, wherein a convolution attention module is arranged in the YOLO model, between the backbone and the neck of the YOLO model;
the convolution attention module comprises a CAM submodule and a SAM submodule, and the training set pictures and the verification set pictures pass through the CAM submodule and the SAM submodule in sequence;
performing maximum pooling and average pooling on the feature vectors in the CAM submodule in parallel, passing the pooled results through a shared MLP, adding the MLP outputs element by element, and applying a sigmoid function activation to obtain the channel attention Mc;
and performing maximum pooling and average pooling in sequence on the feature vectors in the SAM submodule along the channel direction, obtaining an intermediate vector, performing a convolution operation on the intermediate vector, and taking the result of the convolution operation as the input of a sigmoid activation function to obtain the spatial attention MS.
During the assembly of the aviation fan rotor, a worker may place an already-installed part beside the rotor, and such a part can interfere with the recognition performed by the recognition model. In order to improve the model's recognition of the aviation fan rotor, a convolution attention module is added to the structure of the YOLO model. The YOLO model is an existing model structure of the form input → backbone → neck → head, and in the present invention the convolution attention module is fused between the backbone and the neck. The backbone is the most critical feature-extraction part of YOLO, so the convolution attention module is fused after the backbone and before the feature fusion of the neck network: feature extraction is completed in the backbone, and prediction output is produced on the different feature maps after neck feature fusion, so performing attention reconstruction in the convolution attention module at this point serves as a bridge between the two stages. The CAM submodule and the SAM submodule arranged in the convolution attention module generate a weight for each feature channel through learned parameters, model the importance of each feature channel, and then enhance or suppress different channels for different tasks. The benefit of this is that the feature maps in the middle of the network are reconstructed, important features are emphasized and general features are suppressed, thereby improving the target detection effect.
Preferably, only one hidden layer is provided in the shared MLP.
Only one hidden layer is arranged, so that the calculation amount of the feature vector can be reduced, and the training speed is improved.
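A minimal PyTorch sketch of such a convolution attention module, with the single-hidden-layer shared MLP, could look as follows; the reduction ratio and the 7×7 spatial kernel are common defaults for this kind of module, assumed here rather than specified by the text.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """CAM: max- and average-pool the feature map in parallel, pass both
    descriptors through a shared MLP with one hidden layer, add the outputs
    element by element, and apply a sigmoid to obtain Mc."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(  # shared MLP with a single hidden layer
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        mc = torch.sigmoid(
            self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
            + self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        )
        return x * mc

class SpatialAttention(nn.Module):
    """SAM: max- and average-pool along the channel direction, concatenate,
    convolve, and feed the result to a sigmoid to obtain MS."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        pooled = torch.cat(
            [x.amax(dim=1, keepdim=True), x.mean(dim=1, keepdim=True)], dim=1
        )
        return x * torch.sigmoid(self.conv(pooled))

class ConvAttention(nn.Module):
    """Convolution attention module fused between backbone and neck:
    the feature map passes through CAM first, then SAM."""
    def __init__(self, channels):
        super().__init__()
        self.cam = ChannelAttention(channels)
        self.sam = SpatialAttention()

    def forward(self, x):
        return self.sam(self.cam(x))
```

A module like this would be applied to each backbone output feature map before it enters the neck.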
Preferably, the loss function for the accuracy judgment of the YOLO model is as follows:
$\mathrm{Loss} = \lambda_1\,\mathrm{GIOV} + \lambda_2\,\mathrm{DIOV} + \lambda_3\,\mathrm{CIOV}$;
wherein $\lambda_1$, $\lambda_2$ and $\lambda_3$ are proportionality coefficients satisfying $\lambda_1 + \lambda_2 + \lambda_3 = 1$; GIOV is the shape loss, DIOV is the area loss, and CIOV is the position loss;
wherein
$\mathrm{GIOV} = \mathrm{IOU} - \dfrac{A_E - A_U}{A_E}$
$\mathrm{DIOV} = \mathrm{IOU} - \dfrac{\rho^2(b_t, b_{gt})}{E^2}$
$\mathrm{CIOV} = \mathrm{IOU} - \dfrac{\rho^2(b_t, b_{gt})}{E^2} - \beta v$
Wherein
$v = \dfrac{4}{\pi^2}\left(\arctan\dfrac{c}{d} - \arctan\dfrac{a}{b}\right)^2$
Wherein a is the length and b the width of the prediction frame of the feature vector in the state picture, c is the length and d the width of the template in the YOLO model, IOU is the intersection-over-union of the prediction frame and the template in the state picture, $\rho^2$ denotes the squared Euclidean distance between the center of the prediction frame and the center of the template in the state picture, $b_t$ and $b_{gt}$ respectively denote the center of the template and the center of the prediction frame in the state picture, E denotes the diagonal length of the minimum bounding box enclosing the prediction frame and the template (with $A_E$ the area of this bounding box and $A_U$ the area of the union of the two frames), β is a positive weight coefficient, and v is the aspect-ratio consistency coefficient.
In the invention, the aviation fan rotor may not appear completely in the state picture at the time of shooting, or the placement position chosen by the worker may vary, so the size and proportion of the aviation fan rotor in the state picture are uncertain. Shape loss, area loss and position loss are therefore included in the loss function of the invention to compensate for misjudgments of the recognition model in the above situations.
The advantage of the shape loss GIOV is scale invariance: the similarity between the prediction frame and the template in the state picture is independent of their spatial scale. However, the shape loss GIOV has a problem: when the prediction frame or the template is completely enclosed by the other in the state picture, the shape loss GIOV degenerates into a plain intersection-over-union loss. Because it depends heavily on the intersection-over-union term, convergence is too slow in actual training, and the accuracy of the predicted bounding box is low.
The area loss DIOV is added in the application in view of this defect of the shape loss GIOV: its calculation is based instead on the Euclidean distance between the center points of the detection frames, thereby solving the degradation problem of the shape loss GIOV.
The position loss CIOV adds a loss on the scale of the prediction frame on the basis of the area loss DIOV, and simultaneously considers the overlapping area of the prediction frame and the template in the current state picture, the distance between their center points, and the length-width ratio.
Compared with traditional YOLO, which uses only the shape loss GIOV, the position loss CIOV in the invention adds to the loss a penalty term based on the center distance between the prediction frame and the template in the state picture and on their length-width ratio, so that the network makes the prediction frame converge faster during training and obtains higher regression positioning accuracy.
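Reading GIOV, DIOV and CIOV as the standard GIoU, DIoU and CIoU losses expressed with the patent's variables, the weighted loss could be sketched as follows; the λ values and the (x1, y1, x2, y2) box format are illustrative assumptions.

```python
import math
import torch

def combined_box_loss(pred, templ, lambdas=(0.3, 0.3, 0.4)):
    """Loss = λ1·GIOV + λ2·DIOV + λ3·CIOV for boxes given as (x1, y1, x2, y2)
    tensors; GIOV/DIOV/CIOV are taken as the standard GIoU/DIoU/CIoU losses."""
    # intersection and union of prediction frame and template
    ix1, iy1 = torch.max(pred[0], templ[0]), torch.max(pred[1], templ[1])
    ix2, iy2 = torch.min(pred[2], templ[2]), torch.min(pred[3], templ[3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    union = ((pred[2] - pred[0]) * (pred[3] - pred[1])
             + (templ[2] - templ[0]) * (templ[3] - templ[1]) - inter)
    iou = inter / union

    # minimum bounding box enclosing both frames: area A_E and diagonal E^2
    ex1, ey1 = torch.min(pred[0], templ[0]), torch.min(pred[1], templ[1])
    ex2, ey2 = torch.max(pred[2], templ[2]), torch.max(pred[3], templ[3])
    a_e = (ex2 - ex1) * (ey2 - ey1)
    diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2

    # squared Euclidean center distance ρ²(b_t, b_gt)
    rho2 = ((pred[0] + pred[2] - templ[0] - templ[2]) ** 2
            + (pred[1] + pred[3] - templ[1] - templ[3]) ** 2) / 4

    giov_loss = 1 - (iou - (a_e - union) / a_e)          # shape loss term
    diov_loss = 1 - (iou - rho2 / diag2)                 # area loss term
    v = (4 / math.pi ** 2) * (
        torch.atan((templ[2] - templ[0]) / (templ[3] - templ[1]))
        - torch.atan((pred[2] - pred[0]) / (pred[3] - pred[1]))
    ) ** 2
    beta = v / (1 - iou + v)                             # positive weight coefficient
    ciov_loss = 1 - (iou - rho2 / diag2 - beta * v)      # position loss term

    l1, l2, l3 = lambdas
    return l1 * giov_loss + l2 * diov_loss + l3 * ciov_loss
```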
An assembly state monitoring system based on a machine vision technology uses the assembly state monitoring method based on the machine vision technology, and is characterized by comprising an equipment layer, a control layer and a model layer;
the equipment layer is provided with camera equipment, state pictures of the aviation fan rotor in the assembling process are periodically acquired through the camera equipment, and the visualized state pictures are sent to the control layer;
the control layer is provided with a display device, and the display device is used for displaying visual state pictures and receiving feedback of the model layer;
the model layer comprises a model processing module and a judging module, the model processing module is used for identifying the state stage of the aviation fan rotor in the state picture,
the judging module is used for acquiring the state stage of the aviation fan rotor and matching the current state stage of the aviation fan rotor with the current assembly stage of the aviation fan rotor, and if the two do not match, feeding back to the control layer that the current aviation fan rotor is installed incorrectly.
Preferably, the model processing module further comprises a template training sub-module;
the template training submodule is used for:
shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
marking an assembly stage in the picture;
performing data enhancement operation on the picture, wherein the data enhancement operation comprises turning, rotating, scaling, cutting, translating and adding noise on the picture;
and inputting the training set picture and the verification set picture into the recognition model, acquiring the characteristic vector and finishing the training of the recognition template.
Preferably, the template training sub-module further comprises a model sub-unit;
the model subunit selects a YOLO model as the recognition model to obtain feature vectors in the training set pictures and the verification set pictures, wherein the YOLO model is provided with a convolution attention module, and the convolution attention module is arranged between the backbone and the neck of the YOLO model;
the convolution attention module comprises a CAM submodule and an SAM submodule, and the training set picture and the verification set picture sequentially pass through the CAM submodule and the SAM submodule;
performing maximum pooling and average pooling on the feature vectors in the CAM submodule in parallel, passing the pooled results through the shared MLP, adding the MLP outputs element by element, and applying a sigmoid function activation to obtain the channel attention Mc;
and performing maximum pooling and average pooling in sequence on the feature vectors in the SAM submodule along the channel direction, obtaining an intermediate vector, performing a convolution operation on the intermediate vector, and taking the result of the convolution operation as the input of a sigmoid activation function to obtain the spatial attention MS.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (4)

1. An assembly state monitoring method based on machine vision technology, applied to an aviation fan rotor assembly process, characterized by comprising the following steps:
step S1: periodically acquiring state pictures of the aviation fan rotor in an assembly stage, and performing visualization operation and preprocessing operation on the state pictures;
step S2: inputting the processed state picture into a recognition model to judge the state of the aviation fan rotor in the state picture, and acquiring the state stage of the current aviation fan rotor;
step S3: matching the current state stage of the aviation fan rotor with the current assembly stage of the aviation fan rotor, and prompting that the current aviation fan rotor is installed incorrectly if the two do not match;
the training process of identifying the template in the step S2 is as follows:
step S21: shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
step S22: dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
step S23: marking an assembly stage in the picture;
step S24: performing data enhancement operation on the picture, wherein the data enhancement operation comprises turning, rotating, scaling, cutting, translating and adding noise on the picture;
step S25: inputting the training set picture and the verification set picture into the recognition model, acquiring a characteristic vector, and finishing the training of the recognition template;
in the step S25, a YOLO model is selected as the recognition model to obtain feature vectors in the training set pictures and the verification set pictures, wherein the YOLO model is provided with a convolution attention module, and the convolution attention module is arranged between the backbone and the neck of the YOLO model;
the convolution attention module comprises a CAM submodule and an SAM submodule, and the training set picture and the verification set picture sequentially pass through the CAM submodule and the SAM submodule;
performing maximum pooling and average pooling on the feature vectors in the CAM submodule in parallel, passing the pooled results through the shared MLP, adding the MLP outputs element by element, and applying a sigmoid function activation to obtain the channel attention Mc;
and performing maximum pooling and average pooling in sequence on the feature vectors in the SAM submodule along the channel direction, obtaining an intermediate vector, performing a convolution operation on the intermediate vector, and taking the result of the convolution operation as the input of a sigmoid activation function to obtain the spatial attention MS.
2. The assembly state monitoring method based on machine vision technology as claimed in claim 1, wherein only one hidden layer is provided in the shared MLP.
3. The assembly state monitoring method based on machine vision technology as claimed in claim 2, characterized in that the loss function for the accuracy judgment of the YOLO model is as follows:
$\mathrm{Loss} = \lambda_1\,\mathrm{GIOV} + \lambda_2\,\mathrm{DIOV} + \lambda_3\,\mathrm{CIOV}$;
wherein $\lambda_1$, $\lambda_2$ and $\lambda_3$ are proportionality coefficients satisfying $\lambda_1 + \lambda_2 + \lambda_3 = 1$; GIOV is the shape loss, DIOV is the area loss, and CIOV is the position loss;
wherein
$\mathrm{GIOV} = \mathrm{IOU} - \dfrac{A_E - A_U}{A_E}$
$\mathrm{DIOV} = \mathrm{IOU} - \dfrac{\rho^2(b_t, b_{gt})}{E^2}$
$\mathrm{CIOV} = \mathrm{IOU} - \dfrac{\rho^2(b_t, b_{gt})}{E^2} - \beta v$
Wherein
$v = \dfrac{4}{\pi^2}\left(\arctan\dfrac{c}{d} - \arctan\dfrac{a}{b}\right)^2$
Wherein a is the length and b the width of the prediction frame of the feature vector in the state picture, c is the length and d the width of the template in the YOLO model, IOU is the intersection-over-union of the prediction frame and the template in the state picture, $\rho^2$ denotes the squared Euclidean distance between the center of the prediction frame and the center of the template in the state picture, $b_t$ and $b_{gt}$ respectively denote the center of the template and the center of the prediction frame in the state picture, E denotes the diagonal length of the minimum bounding box enclosing the prediction frame and the template (with $A_E$ the area of this bounding box and $A_U$ the area of the union of the two frames), β is a positive weight coefficient, and v is the aspect-ratio consistency coefficient.
4. An assembly state monitoring system based on machine vision technology, which uses the assembly state monitoring method based on machine vision technology of any one of claims 1 to 3, and is characterized by comprising a device layer, a control layer and a model layer;
the equipment layer is provided with camera equipment, state pictures of the aviation fan rotor in the assembling process are periodically acquired through the camera equipment, and the visualized state pictures are sent to the control layer;
the control layer is provided with a display device, and the display device is used for displaying visual state pictures and receiving feedback of the model layer;
the model layer comprises a model processing module and a judging module, the model processing module is used for identifying the state stage of the aviation fan rotor in the state picture,
the judging module is used for acquiring the state stage of the aviation fan rotor and matching the current state stage of the aviation fan rotor with the current assembly stage of the aviation fan rotor, and if the two do not match, feeding back to the control layer that the current aviation fan rotor is installed incorrectly;
the model processing module also comprises a template training submodule;
the template training submodule is used for:
shooting a plurality of groups of pictures of the aviation fan rotor at different assembly stages;
dividing a plurality of groups of state pictures into training set pictures and verification set pictures according to a proportion;
marking an assembly stage in the picture;
performing data enhancement operation on the picture, wherein the data enhancement operation comprises turning, rotating, scaling, cutting, translating and adding noise on the picture;
inputting the training set picture and the verification set picture into the recognition model, acquiring a characteristic vector, and finishing the training of the recognition template;
the template training submodule also comprises a model subunit;
the model subunit selects a YOLO model as a recognition model to obtain the feature vectors in the training set picture and the verification set picture, wherein a convolution attention module is arranged in the YOLO model, and the convolution attention module is arranged between a backbone and a neck of the YOLO model;
the convolution attention module comprises a CAM submodule and an SAM submodule, and the training set picture and the verification set picture sequentially pass through the CAM submodule and the SAM submodule;
performing maximum pooling and average pooling on the feature vectors in the CAM submodule in parallel, passing the pooled results through the shared MLP, adding the MLP outputs element by element, and applying a sigmoid function activation to obtain the channel attention Mc;
and performing maximum pooling and average pooling in sequence on the feature vectors in the SAM submodule along the channel direction, obtaining an intermediate vector, performing a convolution operation on the intermediate vector, and taking the result of the convolution operation as the input of a sigmoid activation function to obtain the spatial attention MS.
CN202210440344.9A 2022-04-25 2022-04-25 Assembly state monitoring method and system based on machine vision technology Active CN114782778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210440344.9A CN114782778B (en) 2022-04-25 2022-04-25 Assembly state monitoring method and system based on machine vision technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210440344.9A CN114782778B (en) 2022-04-25 2022-04-25 Assembly state monitoring method and system based on machine vision technology

Publications (2)

Publication Number Publication Date
CN114782778A CN114782778A (en) 2022-07-22
CN114782778B (en) 2023-01-06

Family

ID=82433967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210440344.9A Active CN114782778B (en) 2022-04-25 2022-04-25 Assembly state monitoring method and system based on machine vision technology

Country Status (1)

Country Link
CN (1) CN114782778B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106842020A (en) * 2016-12-26 2017-06-13 青岛海尔空调器有限总公司 The detection method and air-conditioner of the motor setup error of air-conditioner
CN109190575A (en) * 2018-09-13 2019-01-11 深圳增强现实技术有限公司 Assemble scene recognition method, system and electronic equipment
CN109657535A (en) * 2018-10-30 2019-04-19 银河水滴科技(北京)有限公司 Image identification method, target device and cloud platform
CN109816049A (en) * 2019-02-22 2019-05-28 青岛理工大学 A kind of assembly monitoring method, equipment and readable storage medium storing program for executing based on deep learning
CN109948207A (en) * 2019-03-06 2019-06-28 西安交通大学 A kind of aircraft engine high pressure rotor rigging error prediction technique
CN111079630A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Fault identification method for railway wagon brake beam with incorrect installation position
CN111429418A (en) * 2020-03-19 2020-07-17 天津理工大学 Industrial part detection method based on YOLO v3 neural network
WO2020164282A1 (en) * 2019-02-14 2020-08-20 平安科技(深圳)有限公司 Yolo-based image target recognition method and apparatus, electronic device, and storage medium
CN111624215A (en) * 2020-05-26 2020-09-04 戴姆勒股份公司 Method for the non-destructive testing of internal assembly defects of a part
CN111707458A (en) * 2020-05-18 2020-09-25 西安交通大学 Rotor monitoring method based on deep learning signal reconstruction
CN112503725A (en) * 2020-12-08 2021-03-16 珠海格力电器股份有限公司 Air conditioner self-cleaning control method and device and air conditioner
CN112581430A (en) * 2020-12-03 2021-03-30 厦门大学 Deep learning-based aeroengine nondestructive testing method, device, equipment and storage medium
CN113269234A (en) * 2021-05-10 2021-08-17 青岛理工大学 Connecting piece assembly detection method and system based on target detection
CN114329806A (en) * 2021-11-02 2022-04-12 上海海事大学 Engine rotor bolt assembling quality evaluation method based on BP neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4647514B2 (en) * 2006-02-17 2011-03-09 株式会社日立ソリューションズ Aerial image processing apparatus and aerial image processing method
US10504220B2 (en) * 2017-05-25 2019-12-10 General Electric Company Neural network feature recognition system
CN112794274B (en) * 2021-04-08 2021-07-06 南京东富智能科技股份有限公司 Safety monitoring method and system for oil filling port at bottom of oil tank truck
CN113838013A (en) * 2021-09-13 2021-12-24 中国民航大学 Blade crack real-time detection method and device in aero-engine operation and maintenance based on YOLOv5

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106842020A (en) * 2016-12-26 2017-06-13 青岛海尔空调器有限总公司 The detection method and air-conditioner of the motor setup error of air-conditioner
CN109190575A (en) * 2018-09-13 2019-01-11 深圳增强现实技术有限公司 Assemble scene recognition method, system and electronic equipment
CN109657535A (en) * 2018-10-30 2019-04-19 银河水滴科技(北京)有限公司 Image identification method, target device and cloud platform
WO2020164282A1 (en) * 2019-02-14 2020-08-20 平安科技(深圳)有限公司 Yolo-based image target recognition method and apparatus, electronic device, and storage medium
CN109816049A (en) * 2019-02-22 2019-05-28 青岛理工大学 A kind of assembly monitoring method, equipment and readable storage medium storing program for executing based on deep learning
CN109948207A (en) * 2019-03-06 2019-06-28 西安交通大学 A kind of aircraft engine high pressure rotor rigging error prediction technique
CN111079630A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Fault identification method for railway wagon brake beam with incorrect installation position
CN111429418A (en) * 2020-03-19 2020-07-17 天津理工大学 Industrial part detection method based on YOLO v3 neural network
CN111707458A (en) * 2020-05-18 2020-09-25 西安交通大学 Rotor monitoring method based on deep learning signal reconstruction
CN111624215A (en) * 2020-05-26 2020-09-04 戴姆勒股份公司 Method for the non-destructive testing of internal assembly defects of a part
CN112581430A (en) * 2020-12-03 2021-03-30 厦门大学 Deep learning-based aeroengine nondestructive testing method, device, equipment and storage medium
CN112503725A (en) * 2020-12-08 2021-03-16 珠海格力电器股份有限公司 Air conditioner self-cleaning control method and device and air conditioner
CN113269234A (en) * 2021-05-10 2021-08-17 青岛理工大学 Connecting piece assembly detection method and system based on target detection
CN114329806A (en) * 2021-11-02 2022-04-12 上海海事大学 Engine rotor bolt assembling quality evaluation method based on BP neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Structural design and FLUENT simulation analysis of a double-cone hydrostatic bearing; Wei Kerong et al.; Machinery (《机械》); 2019-05-15 (No. 05); full text *
Development of a YOLO3-based intelligent error-proofing system for piston connecting rods; Zhang Jun et al.; Internal Combustion Engines (《内燃机》); 2020-10-15 (No. 05); full text *
Automobile part configuration recognition based on an improved YOLO V3 algorithm; Zhang Lixiu et al.; Modular Machine Tool & Automatic Manufacturing Technique (《组合机床与自动化加工技术》); 2020-06-20 (No. 06); full text *
Part assembly detection based on machine vision and deep neural networks; Wei Zhongyu et al.; Modular Machine Tool & Automatic Manufacturing Technique (《组合机床与自动化加工技术》); 2020-03-20 (No. 03); full text *
Application of industrial robots and machine vision in the assembly of pitch bearings for wind turbine generator sets; Sun Zhenjun et al.; Shanghai Electric Technology (《上海电气技术》); 2018-09-30 (No. 03); full text *

Also Published As

Publication number Publication date
CN114782778A (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN111368690B (en) Deep learning-based video image ship detection method and system under influence of sea waves
CN109523518B (en) Tire X-ray defect detection method
CN110648310B (en) Weak supervision casting defect identification method based on attention mechanism
JP6012060B2 (en) Image rotation based on image content to correct image orientation
CN109284729A (en) Method, apparatus and medium based on video acquisition human face recognition model training data
US8208737B1 (en) Methods and systems for identifying captions in media material
CN107507174A (en) Power plant's instrument equipment drawing based on hand-held intelligent inspection is as recognition methods and system
US20170193131A1 (en) Manufacturing process visualization apparatus and method
CN109859164A (en) A method of by Quick-type convolutional neural networks to PCBA appearance test
WO2020093603A1 (en) High-intensity multi-directional fdm 3d printing method based on stereoscopic vision monitoring
CN110119768A (en) Visual information emerging system and method for vehicle location
WO2021181647A1 (en) Image processing device, image processing method, and computer-readable medium
CN117152484B (en) Small target cloth flaw detection method based on improved YOLOv5s
CN113396424A (en) System and method for automated material extraction
CN114782778B (en) Assembly state monitoring method and system based on machine vision technology
CN115147380A (en) Small transparent plastic product defect detection method based on YOLOv5
CN117237367B (en) Spiral blade thickness abrasion detection method and system based on machine vision
CN113269234B (en) Connecting piece assembly detection method and system based on target detection
US20050154558A1 (en) Method, system and computer program product for automated discovery and presentation of the direction of flow through components represented in a drawing set
CN102216161A (en) Method for aligning a container
CN110766663A (en) Intelligent method for multi-scale grading and content visualization of diamond
Han et al. BIM-assisted structure-from-motion for analyzing and visualizing construction progress deviations through daily site images and BIM
US20240078654A1 (en) System and method for inspection of a wind turbine blade shell part
Rio-Torto et al. Hybrid Quality Inspection for the Automotive Industry: Replacing the Paper-Based Conformity List through Semi-Supervised Object Detection and Simulated Data
CN114529534A (en) Full-automatic coreless motor winding machine product defect detection method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant