CN114120317B - Optical element surface damage identification method based on deep learning and image processing - Google Patents


Info

Publication number
CN114120317B
CN114120317B (application CN202111428135.4A)
Authority
CN
China
Prior art keywords
model
image
target point
damage
training
Prior art date
Legal status
Active
Application number
CN202111428135.4A
Other languages
Chinese (zh)
Other versions
CN114120317A (en
Inventor
陈明君
尹朝阳
赵林杰
程健
袁晓东
郑万国
廖威
王海军
张传超
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN202111428135.4A
Publication of CN114120317A
Application granted
Publication of CN114120317B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent


Abstract

An optical element surface damage identification method based on deep learning and image processing relates to the technical field of element surface damage identification and addresses the low accuracy of prior-art methods for identifying surface damage on large-aperture elements. The key technical points of the invention include: an automatic acquisition and labeling method for optical element surface defect and contaminant data, which improves dataset acquisition efficiency; image processing to crop and augment the target-point region, so that the model's attention is focused on that region; composite images synthesized from three light sources used as the basis for training and prediction, which improves classification accuracy; and a ResNeXt-based damage prediction model with transfer learning introduced into its training process, whose effectiveness is verified. The invention realizes the construction of a damage prediction model and provides technical support for the automatic detection and repair of damage points on large-aperture elements.

Description

Optical element surface damage identification method based on deep learning and image processing
Technical Field
The invention relates to the technical field of component surface damage identification, in particular to an optical component surface damage identification method based on deep learning and image processing.
Background
Laser damage to optical elements has long been a key bottleneck restricting the stable operation of high-power solid-state laser facilities. On the one hand, damage weakens the element's material, making further damage and growth easier and severely shortening the element's service life; on the other hand, it disturbs laser transmission and can harm downstream elements. Studies have shown that damage grows slowly in the initial stage, and the number and size of damage sites gradually increase with the number of high-power laser shots. Once the damage reaches a certain level, it grows dramatically and eventually leads to rejection of the element. Repairing an element promptly after damage occurs is therefore important for prolonging its service life. At present, CO2 lasers are commonly used in engineering to locally repair fused-silica damage, and obtaining accurate information about the damage on the element surface by a suitable detection means is the prerequisite for laser repair.
Contaminants are inevitably introduced during transportation and installation of the elements; they are tiny, varied in shape, and image much like damage, interfering with the detection and repair of damage points, so a suitable method for distinguishing damage from contaminants is required. Early identification was performed mainly by hand, with target points recognized one by one based on operator experience; this is inefficient and error-prone and cannot meet the engineering demand for batch repair. With the development of machine vision and image processing, methods that collect a target image, extract image features, and classify them with a classifier have been widely adopted. Such methods reduce human involvement to some extent, but their accuracy is low, their robustness to interference is poor, and their detection conditions are strict, so they cannot adapt to the actual working conditions of large-aperture element repair.
Disclosure of Invention
In view of the above problems, the invention provides an optical element surface damage identification method based on deep learning and image processing, to address the low accuracy with which prior-art methods identify surface damage on large-aperture elements.
The method for identifying the surface damage of the optical element based on deep learning and image processing comprises the following steps:
step one, acquiring a plurality of microscopic images containing target points on the surface of an element to form an image data set; the element surface target point includes a defective region and a pseudo-defective region;
step two, preprocessing an image data set;
inputting the preprocessed image data set into a damage recognition model based on a deep neural network for training to obtain a trained damage recognition model;
and step four, inputting the image to be detected containing the target point on the surface of the element into a trained damage recognition model to obtain a recognition result.
Further, in the step one, each target point in the image dataset correspondingly includes a plurality of images acquired under different light sources.
Further, in step one, the different light sources include a backlight source, a ring light source, and a coaxial light source.
Further, the preprocessing in the second step includes: labeling the image data, labeling an image containing a defect area as 1, and labeling an image containing a pseudo defect area as 0; dividing the image data to obtain an image of a target point and the pixel size of the target point; performing data enhancement on the image dataset; and carrying out RGB synthesis on a plurality of images acquired by a single target point under different light sources according to the sequence of red, green and blue channels, and obtaining a synthesized image.
Further, performing data enhancement on the image data set in the preprocessing of step two includes: uniformly taking a plurality of points on the contour line of the segmented target point, and, with each contour point as the center and the square circumscribing the contour as the crop size, cropping the target point and filling the cropped target-point images into the original image data; and applying flip, rotation, and noise-disturbance data enhancement to the image data.
Further, the specific steps of the third step include:
dividing the preprocessed image dataset according to a size range based on the pixel size of a target point;
step three, dividing image data corresponding to a plurality of size ranges into a training set and a verification set according to a proportion;
thirdly, inputting the training set into a damage identification model based on a deep neural network ResNeXt for training; the method specifically comprises the following steps: firstly, carrying out convolution operation on an input image through a preprocessing layer consisting of a plurality of convolution kernels to obtain an RGB image subjected to color space change processing; then, migrating and loading ResNeXt model parameters obtained by pre-training under an ImageNet data set, dividing an RGB image subjected to color space change treatment into a plurality of groups, respectively entering four convolution groups, and outputting predicted values of defects and pseudo defects through a self-adaptive average pooling layer and a full-connection layer; the four convolution groups use residual structures and grouping convolution, and the residual structures are connected in a jumping mode; calculating errors between the predicted value and the true value in the training process, and reversely transmitting the errors to each layer of the model to adjust model parameters;
and thirdly, inputting the verification set into the model after each training to adjust the model hyper-parameters, and evaluating the model prediction accuracy rate until the model prediction accuracy rate is not improved, and stopping training to obtain a trained damage prediction model.
Further, in step three, the error between the predicted value and the true value is calculated using the following cross-entropy function as the loss function:

Loss = −[ y·log(ŷ) + (1 − y)·log(1 − ŷ) ]

where ŷ denotes the confidence probability that the predicted sample is a defect, and y denotes the label of the sample data.
Further, in step three-four, whether the model prediction accuracy reaches a preset threshold is evaluated using the following formula:

Acc = (TP + TN) / (TP + FP + FN + TN)

where TP, FP, FN, and TN denote the numbers of true positives, false positives, false negatives, and true negatives, respectively, in the prediction results.
Further, in step three-four, whether the model prediction reaches the expected effect is evaluated using the Grad-CAM algorithm, specifically as follows: the mean of the partial derivatives of the predicted class probability with respect to the last feature layer of the model is taken as the weight of that feature layer; the feature maps are weighted by these values and linearly combined to obtain a class activation heat map; if the high-pixel-value portion of the class activation heat map is concentrated in the target-point region and the pixel values of the background region are low, the model prediction achieves the expected effect. The class activation heat map is computed as:

L^c = ReLU( Σ_k α_k^c · A^k )

where ReLU denotes the activation function, α_k^c denotes the weight of the k-th feature map for class c, and A^k denotes the activation values of the k-th feature map.
Further, in the fourth step, the image to be detected including the target point on the surface of the element is an image processed by: respectively carrying out segmentation processing on three images collected under a backlight source, an annular light source and a coaxial light source to obtain three target point images; and (3) carrying out RGB synthesis on the three target point images according to the sequence of red, green and blue channels, and obtaining a synthesized image.
The beneficial technical effects of the invention are as follows:
the invention provides an automatic acquisition and labeling method for the surface defects and pollutant data of the optical element, which improves the acquisition efficiency of a data set; the image processing is utilized to realize the interception and data enhancement of the target point area, so that the attention of the model is focused on the target point area; the three light source synthesized images are used as training and prediction basis, so that the classification accuracy of the model is improved; and constructing a damage prediction model based on ResNeXt, introducing transfer learning into a model training process, and verifying the effectiveness of the model. The method realizes the construction of the damage prediction model and provides technical support for the automatic detection and repair of the damage points of the large-caliber element.
Drawings
The invention may be better understood by reference to the following description taken in conjunction with the accompanying drawings, which are included to provide a further illustration of the preferred embodiments of the invention and to explain the principles and advantages of the invention, together with the detailed description below.
FIG. 1 is a schematic overall flow chart of a method for identifying surface damage of an optical element according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an image data acquisition device according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an image preprocessing process in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a damage prediction model structure based on ResNeXt and a training process thereof in an embodiment of the invention;
FIG. 5 is a schematic diagram of a pretreatment layer structure in an embodiment of the present invention;
FIG. 6 is an exemplary view of an image acquired under three light source conditions in an embodiment of the present invention;
FIG. 7 is a diagram showing an example of an amplification and synthesis process of image data in an embodiment of the present invention; wherein, figure (a) is image amplification; fig. (b) is image synthesis;
FIG. 8 is a thermodynamic diagram obtained using the Grad-CAM algorithm in an embodiment of the invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, exemplary embodiments or examples of the present invention will be described below with reference to the accompanying drawings. It is apparent that the described embodiments or examples are only implementations or examples of a part of the invention, not all. All other embodiments or examples, which may be made by one of ordinary skill in the art without undue burden, are intended to be within the scope of the present invention based on the embodiments or examples herein.
The invention provides an optical element surface damage identification method based on deep learning and image processing, which introduces the deep learning into the optical element surface damage identification, builds a prediction model for distinguishing damage and pollutants by means of a convolutional neural network, improves the prediction accuracy by means of data enhancement, migration learning and other methods, and verifies the effectiveness of the model. The method can realize automatic identification of the target point types, and effectively eliminates the interference of pollutants on the damage repair process.
The embodiment of the invention provides an optical element surface damage identification method based on deep learning and image processing, which mainly relates to the acquisition of a data set, data processing, construction of a damage identification model, model training, application and the like, and the whole flow diagram is shown in figure 1. Firstly, acquiring target points on the surface of an element to acquire damage and pollutant images; preprocessing the acquired image data, including eliminating background interference, concentrating a detection area on a target, and expanding the data to increase the number of training samples; dividing the data set into a training set and a verification set and training the built model; and then verifying the validity of the obtained model, and finally applying the model to damage identification. The method comprises the following specific steps:
and 1, automatically acquiring a data set.
According to the embodiment of the invention, training the damage prediction model requires a large amount of labeled image data, and manual acquisition and labeling are very time-consuming. The invention therefore designs an automatic dataset acquisition and labeling method: first, a dark-field scanning camera performs dark-field detection on the element surface to obtain the coordinates of surface target points; then, according to the dark-field detection result, the target points are moved one by one to the microscopic detection station, and a microscopic camera is controlled to acquire target-point images under different light sources; finally, the target points are positioned one by one at the microscopic detection station and subjected to dust blowing and wiping; if a target point is removed by this treatment, it is labeled as a contaminant, i.e., a pseudo defect, and otherwise it is judged to be a defect.
A schematic of the data acquisition device is shown in fig. 2; it comprises a motion platform, a dark-field scanning system, and a microscopic detection system. The motion platform has X, Y, and Z motion axes whose directions coincide with the X, Y, Z coordinate axes of the machine-tool coordinate system; it can carry a large-aperture optical element to move along the X and Y axes, and carry the dark-field scanning system and microscopic detection system to move along the Z axis. The dark-field scanning system consists of an area-array camera, a double telecentric lens, and a ring light source; it can resolve defects down to 9.78 μm and achieve distortion-free detection over a 50 mm × 50 mm range. The microscopic detection system consists of an area-array camera, a microscope lens, and a three-light-source illumination system; the microscopic camera can resolve defects down to 0.63 μm over a 1.5 mm × 1.3 mm range. The three-light-source illumination system comprises a backlight, a ring light source, and a coaxial light source, which are switched automatically by a light source controller. The specific steps of automatic dataset acquisition are as follows:
step 1-1: and controlling the motion platform and the dark field scanning system to carry out dark field scanning photographing on the surface of the optical element, wherein the size of the optical element is 430mm multiplied by 430mm, and 9 multiplied by 9 subgraphs are needed to realize full-caliber scanning. Processing the obtained dark field image to obtain pixel coordinates of a target point, and calculating machine tool coordinates of the defect point positioned to a microscopic detection system according to the position of the subgraph and the pixel coordinates of the defect point in the subgraph;
step 1-2: positioning the defect points to a microscopic station one by one, sequentially starting an annular light source, a coaxial light source and a backlight light source, controlling a microscopic camera to collect target point images under different light sources, and storing an image file according to the naming format of 'element number_defect number_illumination mode_defect size';
step 1-3: after all the target point images are acquired, the target points are repositioned one by one, dust blowing and wiping treatment are carried out on the microscopic region, and microscopic view images are observed. If the processed target point is removed, marking the image as a pollutant, namely a pseudo defect, and otherwise marking the image as a defect; the image containing the defect is marked as 1, and the image containing the pseudo defect is marked as 0.
And 2, preprocessing the image data.
According to an embodiment of the present invention, the image preprocessing process includes target-point region extraction, data amplification, and image synthesis, as shown in fig. 3. Images collected by the microscopic camera contain a large amount of background; to concentrate the model's attention on the target region, the microscopic image is processed and the axis-aligned bounding rectangle of the target region is extracted. The target presents different characteristics under different light sources, so, to improve prediction accuracy, the target-point images under the different light sources are synthesized into one training sample. To expand the number of samples, different areas of the target point are cropped and filled into the dataset. The specific process is as follows:
step 2-1: and (3) carrying out segmentation processing on the microscopic image to obtain an image of the target point area, solving the minimum circumscribed rectangle of the area, and intercepting the rectangle according to the rectangle. Since the predictive model requires an input image size of 224 x 224, the image needs to be adapted to the model requirements when truncated. In order to avoid the shape change of a target point caused by image size adjustment, taking the center of an external rectangle as the center, taking the long side of the rectangle as the side length, intercepting a square area as a target area, applying the square area to other light source images to obtain the image of the target point area under three light sources, and uniformly scaling the size of the image to 224 multiplied by 224;
step 2-2: and uniformly taking three points on the target contour, taking the contour points as centers, taking the acquired target area as an intercepting area to intercept the target point, and filling the intercepted image into the original data. In addition, enhancement modes such as overturning, rotating, noise disturbance and the like are randomly adopted on the image in the model training stage so as to improve the generalization capability of the model;
step 2-3: and synthesizing target point images under three illumination conditions of annular light, coaxial light and back light sequentially according to the sequence of Red, green, blue three channels, wherein the synthesized images simultaneously have target characteristics under three illumination conditions, and the images are used as final training and prediction images.
And 3, dividing the data set into a training set and a verification set, wherein the training set is used for fitting the model, and the verification set is used for adjusting the model super-parameters and evaluating the model prediction capability.
According to the embodiment of the invention, the sizes of the target points, i.e., defect regions, are concentrated in the range 50–1000 μm; because this span is large, the data must be divided by defect-region size so that the training and validation sets cover defects in all size ranges. The specific data-division process is as follows:
step 3-1: the defect and contaminant data are divided according to the actual size ranges of 50 μm-100 μm, 100 μm-300 μm, 300 μm-600 μm, 600 μm-1000 μm, respectively. And 2-1, capturing an image of the target point area to obtain the pixel size of the target point, calculating to obtain the actual size, and then manually dividing according to different size ranges.
Step 3-2: randomly extract images from the defect and contaminant data of each size range at a 4:1 ratio and place them in the training set and the validation set, respectively. To ensure the reliability of validation, an original image and its augmented copies are always placed in the same subset.
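The per-size-range 4:1 split can be sketched as follows. The `group_id` field, which ties an original image to its augmented copies so they land in the same subset, is an assumed bookkeeping device, not something named in the patent.

```python
import random

SIZE_BINS = [(50, 100), (100, 300), (300, 600), (600, 1000)]  # um

def split_dataset(samples, ratio=0.8, seed=42):
    """Split samples (dicts with 'size_um' and 'group_id') 4:1 within
    each size bin; all samples sharing a group_id stay together."""
    train, val = [], []
    rnd = random.Random(seed)
    for lo, hi in SIZE_BINS:
        groups = sorted({s['group_id'] for s in samples
                         if lo <= s['size_um'] < hi})
        rnd.shuffle(groups)
        keep = set(groups[:int(len(groups) * ratio)])
        for s in samples:
            if lo <= s['size_um'] < hi:
                (train if s['group_id'] in keep else val).append(s)
    return train, val
```
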
And 4, building a damage identification model based on the deep neural network ResNeXt and training the model.
According to the embodiment of the invention, since the training data are multi-light-source composite images, a preprocessing layer is added to process each image before it enters the ResNeXt model. Because the number of training samples is small, training the model from scratch is less effective; the invention therefore introduces transfer learning into the training process: a pre-trained model is loaded first, and its parameters are then fine-tuned with the defect and contaminant dataset. After each training round the model is evaluated on the validation set, and training stops when the accuracy no longer improves. The structure of the damage prediction model and its training process are shown in fig. 4; the specific process is as follows:
and 4-1, using a ResNeXt model as a base model of the damage prediction model. After the original image is input into the model, the original image is firstly subjected to a convolution layer with the convolution kernel size of 7 multiplied by 7, then is divided into 32 groups to enter four convolution groups from conv2 to conv5, and finally, the prediction values of defects and pseudo defects are output through a self-adaptive average pooling layer and a full connection layer. Residual structures and grouping convolution are used in conv2-conv 5, jump connection is used for the residual structures, the gradient disappearance problem caused by the increase of the depth of the convolutional neural network is relieved, and the grouping convolution can prevent the overfitting of a specific data set under the condition that the parameter number is unchanged.
Step 4-2: add a preprocessing layer to the model; the composite image enters the ResNeXt base model after passing through it. Since the input RGB image is artificially synthesized rather than a natural image, the preprocessing layer is added to eliminate the adverse effects of such hard synthesis. As shown in fig. 5, the preprocessing layer convolves the 224 × 224 × 3 input sequentially with 16 kernels of size 1 × 1 × 3 and then 3 kernels of size 1 × 1 × 16. A 1 × 1 convolution only mixes the channels of a single pixel, involves no neighboring pixels, and does not change the input size; through this processing, an RGB image with a learned color-space transformation is obtained.
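Under the kernel counts stated above (16 kernels of 1 × 1 × 3, then 3 kernels of 1 × 1 × 16), a minimal PyTorch sketch of the preprocessing layer is:

```python
import torch
import torch.nn as nn

# Learned per-pixel colour-space transform: 1x1 convolutions mix only
# the channels of each pixel and leave the spatial size unchanged.
preprocess = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=1),   # 16 kernels of 1x1x3
    nn.Conv2d(16, 3, kernel_size=1),   # 3 kernels of 1x1x16
)
```

The output keeps the 224 × 224 × 3 shape, so it can feed the ResNeXt base model unchanged.
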
Step 4-3: load ResNeXt model parameters pre-trained on the ImageNet dataset into the present model. ImageNet contains millions of images in 1000 classes; migrating parameters trained on it speeds up model training and avoids the overfitting caused by the small amount of defect and pseudo-defect data. The shallow layers of a convolutional neural network have small receptive fields and learn general detail features such as textures, contours, edges, and shapes, which are weakly related to image category, so the migrated parameters at the front of the model are adjusted with a small learning rate; the deeper layers have larger receptive fields and learn semantic features closely related to image category, which differ considerably between ImageNet and the defect dataset, so the parameters at the back of the model are adjusted with a larger learning rate. Specifically, the invention uses a learning rate of 0.001 for the shallower conv2–conv3 and 0.01 for the deeper conv4–conv5.
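The layer-wise learning rates can be realized with SGD parameter groups. `model` below is a tiny stand-in with assumed submodule names (`conv2`..`conv5`), not the actual ResNeXt; only the 0.001/0.01 rates come from the text.

```python
import torch
import torch.nn as nn

# Stand-in modules representing the four convolution groups.
model = nn.ModuleDict({
    'conv2': nn.Conv2d(3, 8, 3), 'conv3': nn.Conv2d(8, 8, 3),
    'conv4': nn.Conv2d(8, 16, 3), 'conv5': nn.Conv2d(16, 16, 3),
})

# Shallow layers fine-tuned gently, deep layers adjusted faster.
optimizer = torch.optim.SGD([
    {'params': list(model['conv2'].parameters())
               + list(model['conv3'].parameters()), 'lr': 0.001},
    {'params': list(model['conv4'].parameters())
               + list(model['conv5'].parameters()), 'lr': 0.01},
], momentum=0.9)
```
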
Step 4-4: input the training set to train the model: compute the error between the model's predicted value and the true value, back-propagate the error into each layer of the model to adjust the model parameters, and evaluate the model on the validation set during training. Training stops when the model converges or reaches the expected effect; the resulting model is the final damage prediction model. In the training phase, the cross-entropy function shown in formula (1) is used as the loss function, and the mini-batch momentum gradient descent shown in formula (2) is used as the optimizer. In the model evaluation stage, the classification model is evaluated with the accuracy shown in formula (3); the expected effect is reached when the classification accuracy Acc reaches a preset threshold.
Loss = −[ y·log(ŷ) + (1 − y)·log(1 − ŷ) ]    (1)

where ŷ is the confidence probability with which the model predicts the sample to be a defect, and y is the sample label: 1 for a defect, 0 for a pseudo defect.

v ← α·v − (η/m)·Σ_{i=1}^{m} ∇_ω Loss_i ,   ω ← ω + v    (2)

where ω is the parameter being optimized, α is the momentum coefficient, η is the learning rate, and m is the number of samples in a mini-batch.

Acc = (TP + TN) / (TP + FP + FN + TN)    (3)

where TP, FP, FN, and TN denote the numbers of true positives, false positives, false negatives, and true negatives, respectively, in the prediction results.
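Formulas (1) and (3) can be checked with a few lines of plain Python; the function names are illustrative, not from the patent.

```python
import math

def bce_loss(y, p):
    """Cross-entropy of formula (1): y is the 0/1 label,
    p the predicted defect probability."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def accuracy(tp, fp, fn, tn):
    """Classification accuracy of formula (3)."""
    return (tp + tn) / (tp + fp + fn + tn)
```
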
And 5, verifying the validity of the model.
According to the embodiment of the invention, the convolutional neural network is an end-to-end model: after a target-point image is input, the prediction result is output directly, all intermediate operations are contained inside the network, and the basis of its decision is difficult to present. To prevent the model from classifying on wrong grounds, the invention applies the Grad-CAM algorithm to visualize the model, obtaining the degree to which different image regions influence the model's final decision and thereby verifying the model's effectiveness.
The Grad-CAM algorithm takes, as the weight of each feature map, the average of the partial derivatives of the predicted class probability with respect to the last feature layer of the model, as shown in formula (4). The feature maps are then weighted by these values and linearly combined to obtain the class activation heat map of formula (5). This map reflects how strongly different regions of the image influence the model's final decision; the higher a pixel value, the larger that region's share in the decision. If the high-value part of the heat map L is concentrated in the target-point region while the background region has low pixel values, the model is valid.
$$\alpha_k^c = \frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^c}{\partial A_{ij}^k} \qquad (4)$$

where $\alpha_k^c$ is the weight of the $k$-th feature map for class $c$, $y^c$ is the model's output value for class $c$, $A^k$ is the activation value of the $k$-th feature map, and $Z$ is the number of spatial positions in the feature map.
$$L = \mathrm{ReLU}\!\left(\sum_{k}\alpha_k^c A^k\right) \qquad (5)$$

where ReLU denotes the activation function.
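Under the same notation, formulas (4) and (5) reduce to a few lines of NumPy. This is a sketch of the Grad-CAM weighting only (the array shapes and function name are assumptions), not the exact visualization code of the invention:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    # feature_maps: (K, H, W) activations A^k of the last convolutional layer.
    # gradients:    (K, H, W) partial derivatives dy^c/dA^k for class c.
    # Formula (4): weight of map k = spatial average of its gradients.
    weights = gradients.mean(axis=(1, 2))              # shape (K,)
    # Formula (5): heat map L = ReLU(sum_k weight_k * A^k).
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    return np.maximum(cam, 0)
```

In practice the resulting map is upsampled to the input resolution and overlaid on the image, as in the thermodynamic diagrams of fig. 8.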
Another embodiment of the present invention provides a case study of the optical element surface damage identification method based on deep learning and image processing, taking one training and application cycle of the damage identification model as an example. The specific process is as follows:
1. Training of the damage identification model
(1) Position the target point in the microscopic field of view according to the dark-field detection result, acquire target-point images under different illumination conditions with the microscopic camera, and label the images. The microscopic images under the three light-source conditions shown in fig. 6 are obtained by controlling the ring light RL, the coaxial light CL, and the backlight BL to illuminate the target region in sequence. The type of each image is determined by blowing dust off, wiping, and similar treatment of the target point, which enables the data labeling. In total, 1117 surface target points of optical elements were collected: 444 defect points and 673 pseudo-defects.
(2) Preprocess the data. Taking ID-727 as an example: first, extract the circumscribed square of the target region and use it to crop the images under the three illuminations, as in region (1) of fig. 7. Then, taking three points on the contour of the target region as centers and the same square as the crop size, crop the target point again to obtain magnified images of the target point, as in regions (2), (3), and (4). The original image and the magnified images are combined in three-channel RGB order to form the final data, Merge-RGB, shown in FIG. 7.
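The Merge-RGB step amounts to stacking the three single-channel crops as color channels. A minimal sketch follows; `merge_rgb` is an illustrative helper name, and the channel order (ring, coaxial, back light) is an assumption consistent with the red-green-blue ordering described in the claims:

```python
import numpy as np

def merge_rgb(ring, coaxial, back):
    # Each input: 2-D uint8 crop of the same circumscribed square of the
    # target point under one light source. Stacked as the R, G, B channels.
    assert ring.shape == coaxial.shape == back.shape
    return np.stack([ring, coaxial, back], axis=-1)
```

The magnified contour-centered crops described above would be merged the same way before being appended to the dataset.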
(3) Partition the data. The data are divided by size, with the results shown in table 1. Images are randomly drawn from each size range at a ratio of 4:1 to form the training set and the validation set.
Table 1 Data statistics after processing

Size/μm        50–100   100–300   300–600   600–1000   Total
Damage            392       836       324        224    1776
Contaminants     1880       612       120         80    2692
Total            2272      1448       444        304    4468
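The 4:1 stratified draw from each size range might look like the following sketch (the dict layout, function name, and seed handling are assumptions for illustration):

```python
import random

def split_by_size(samples, ratio=0.8, seed=0):
    # samples: dict mapping a size-range label (e.g. "50-100") to a list of
    # image identifiers. Each size bin is shuffled independently and split
    # 4:1 into training and validation sets, preserving the size mix.
    rng = random.Random(seed)
    train, val = [], []
    for bin_images in samples.values():
        images = list(bin_images)
        rng.shuffle(images)
        cut = int(len(images) * ratio)
        train.extend(images[:cut])
        val.extend(images[cut:])
    return train, val
```

Splitting within each bin keeps the small and large target points represented in both sets, which a single global shuffle would not guarantee.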
(4) Train the model. The model is trained with the training-set data and evaluated with the validation-set data to obtain the optimal damage classification model.
(5) Verify the validity of the model. FIG. 8 shows the thermodynamic diagrams obtained with the Grad-CAM algorithm, which reflect how strongly different regions of the image influence the model's final decision; the closer a region is to red, the larger its share in the decision. As fig. 8 shows, the basis for judging an image as either a defect or a pseudo-defect falls on the target point, indicating that the model has learned the characteristics of defects and pseudo-defects. The accuracy of the trained model is 99.1%, which meets engineering requirements.
2. Application of the damage prediction model
(1) Load the trained optimal damage prediction model.
(2) Move the defect point into the microscopic field of view according to the dark-field detection result, and switch the light sources to obtain microscopic images under the three light sources.
(3) Crop the target region, synthesize it into an RGB image, and input the RGB image into the damage prediction model to complete the discrimination between true and false defects.
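Steps (1)–(3) of the application phase can be sketched end to end. Here the `model` callable is an assumed stand-in for the trained damage prediction network, and the 0.5 threshold on the defect confidence is an assumption consistent with the labeling convention (defect = 1, pseudo-defect = 0):

```python
import numpy as np

def classify_target(ring, coaxial, back, model):
    # model: assumed callable mapping an (H, W, 3) float array to the defect
    # confidence probability y_hat in [0, 1] (stand-in for the trained net).
    # Build the Merge-RGB input from the three illumination crops.
    x = np.stack([ring, coaxial, back], axis=-1).astype(np.float32) / 255.0
    y_hat = float(model(x))
    return "defect" if y_hat >= 0.5 else "pseudo-defect"
```

In deployment, the callable would wrap the loaded ResNeXt-based model together with whatever resizing and normalization it was trained with.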
Through the above process, the invention realizes identification of surface damage on optical elements and further improves the automated detection level for large-aperture optical elements.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments may be devised within the scope of the invention described herein. The present disclosure is intended to be illustrative, not limiting, of the scope of the invention, which is defined by the appended claims.

Claims (8)

1. The method for identifying the surface damage of the optical element based on deep learning and image processing is characterized by comprising the following steps of:
step one, acquiring a plurality of microscopic images containing target points on the surface of an element to form an image data set; the element surface target point includes a defective region and a pseudo-defective region;
step two, preprocessing an image data set; the pretreatment comprises the following steps: labeling the image data, labeling an image containing a defect area as 1, and labeling an image containing a pseudo defect area as 0; dividing the image data to obtain an image of a target point and the pixel size of the target point; performing data enhancement on the image dataset; RGB synthesis is carried out on a plurality of images collected by a single target point under different light sources according to the sequence of red, green and blue channels, and synthesized images are obtained;
inputting the preprocessed image data set into a damage recognition model based on a deep neural network for training to obtain a trained damage recognition model; the method comprises the following specific steps:
dividing the preprocessed image dataset according to a size range based on the pixel size of a target point;
step three, dividing image data corresponding to a plurality of size ranges into a training set and a verification set according to a proportion;
thirdly, inputting the training set into a damage identification model based on a deep neural network ResNeXt for training; the method specifically comprises the following steps: firstly, carrying out convolution operation on an input image through a preprocessing layer consisting of a plurality of convolution kernels to obtain an RGB image subjected to color space change processing; then, migrating and loading ResNeXt model parameters obtained by pre-training under an ImageNet data set, dividing an RGB image subjected to color space change treatment into a plurality of groups, respectively entering four convolution groups, and outputting predicted values of defects and pseudo defects through a self-adaptive average pooling layer and a full-connection layer; the four convolution groups use residual structures and grouping convolution, and the residual structures are connected in a jumping mode; calculating errors between the predicted value and the true value in the training process, and reversely transmitting the errors to each layer of the model to adjust model parameters;
inputting the verification set into the model after each training to adjust the model hyper-parameters, and evaluating the model prediction accuracy rate until the model prediction accuracy rate is not improved, and stopping training to obtain a trained damage prediction model;
and step four, inputting the image to be detected containing the target point on the surface of the element into a trained damage recognition model to obtain a recognition result.
2. The method for recognizing surface damage of optical element based on deep learning and image processing according to claim 1, wherein each target point in the image dataset correspondingly comprises a plurality of images collected under different light sources in step one.
3. The method for recognizing surface damage of optical element based on deep learning and image processing according to claim 2, wherein the different light sources in the first step include a backlight source, a ring-shaped light source and a coaxial light source.
4. A method for identifying surface damage of an optical element based on deep learning and image processing as recited in claim 3, wherein the data enhancement of the image dataset in the preprocessing in step two includes: uniformly taking a plurality of points on the contour line of the target point after the segmentation processing, taking the contour points as the centers, taking the contour line external square as the intercepting size to intercept the target point, and filling the intercepted target point image into the original image data; and (3) performing data enhancement of overturn, rotation and noise disturbance on the image data.
5. The method for recognizing surface damage of optical element based on deep learning and image processing according to claim 1, wherein in the third step, the error between the predicted value and the true value is calculated using the following cross entropy function as the loss function:
in the method, in the process of the invention,a confidence probability indicating that the predicted sample is a defect; y represents the label of the sample data.
6. The method for recognizing surface damage of optical element based on deep learning and image processing according to claim 1, wherein in the third and fourth steps, whether the model prediction accuracy reaches a preset threshold is evaluated by using the following formula:
$$Acc = \frac{TP + TN}{TP + FP + FN + TN}$$

where TP, FP, FN, and TN are the numbers of true positives, false positives, false negatives, and true negatives, respectively, in the prediction results.
7. The method for recognizing surface damage of optical element based on deep learning and image processing according to claim 1, wherein in the third and fourth steps, the method for evaluating whether the model prediction accuracy reaches the expected effect by using the Grad-CAM algorithm comprises the following steps: taking the average value of the prediction probability of the belonging category relative to the partial derivative of the last layer of the feature layer of the model as the weight of the feature layer, weighting the feature layer by using the weight and linearly combining to obtain a category activation thermodynamic diagram, and if the part with high pixel value in the category activation thermodynamic diagram is concentrated in a target point area and the pixel value in a background area is low, indicating that the model prediction achieves the expected effect; the class activation thermodynamic diagram calculation formula is as follows:
$$L = \mathrm{ReLU}\!\left(\sum_{k}\alpha_k^c A^k\right),\qquad \alpha_k^c = \frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^c}{\partial A_{ij}^k}$$

where ReLU denotes the activation function; $\alpha_k^c$ is the weight with which the $k$-th feature map belongs to class C; $A^k$ is the activation value of the $k$-th feature map; and $Z$ is the number of spatial positions in the feature map.
8. The method for recognizing surface damage of optical element based on deep learning and image processing according to claim 3, wherein the image to be detected including the target point on the surface of the element in the fourth step is an image processed by: respectively carrying out segmentation processing on three images collected under a backlight source, an annular light source and a coaxial light source to obtain three target point images; and (3) carrying out RGB synthesis on the three target point images according to the sequence of red, green and blue channels, and obtaining a synthesized image.
Publications (2)

Publication Number Publication Date
CN114120317A CN114120317A (en) 2022-03-01
CN114120317B true CN114120317B (en) 2024-04-16



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559298A (en) * 2018-11-14 2019-04-02 电子科技大学中山学院 Emulsion pump defect detection method based on deep learning
CN109800708A (en) * 2018-12-13 2019-05-24 程琳 Visit image lesion intelligent identification Method in aero-engine hole based on deep learning
WO2020119103A1 (en) * 2018-12-13 2020-06-18 程琳 Aero-engine hole detection image damage intelligent identification method based on deep learning
CN110334760A (en) * 2019-07-01 2019-10-15 成都数之联科技有限公司 A kind of optical component damage detecting method and system based on resUnet
CN112580519A (en) * 2020-12-22 2021-03-30 中国科学院合肥物质科学研究院 Soybean damage identification method of deep learning model based on self-adaptive mixed feature recalibration
CN112580264A (en) * 2020-12-25 2021-03-30 中国人民解放军国防科技大学 BP neural network algorithm-based damage point size distribution prediction method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on dark-field detection algorithms for particulate contaminants on large-aperture mirror surfaces; Yin Zhaoyang; Zhang Dezhi; Zhao Linjie; Chen Mingjun; Cheng Jian; Jiang Xiaodong; Miao Xinxiang; Niu Longfei; Acta Optica Sinica (No. 07); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant