CN113706496A - Aircraft structure crack detection method based on deep learning model - Google Patents

Aircraft structure crack detection method based on deep learning model

Info

Publication number
CN113706496A
CN113706496A (application CN202110970084.1A)
Authority
CN
China
Prior art keywords
feature map
crack
deep learning
learning model
comparison
Prior art date
Legal status
Granted
Application number
CN202110970084.1A
Other languages
Chinese (zh)
Other versions
CN113706496B (en)
Inventor
吕帅帅
王彬文
杨宇
王叶子
李嘉欣
Current Assignee
AVIC Aircraft Strength Research Institute
Original Assignee
AVIC Aircraft Strength Research Institute
Priority date
Filing date
Publication date
Application filed by AVIC Aircraft Strength Research Institute
Priority to CN202110970084.1A
Publication of CN113706496A
Application granted
Publication of CN113706496B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G06F 18/24137 Distances to cluster centroïds
    • G06F 18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation


Abstract

The application belongs to the field of structural health monitoring, and particularly relates to an aircraft structure crack detection method based on a deep learning model. The method comprises the following steps: step one, constructing a deep learning model comprising: a suspected crack feature extraction module used for extracting a feature map containing a suspected crack region from an image to be detected and acquiring coordinate information of the suspected crack region; a comparison feature extraction module used for extracting the feature map of the corresponding region from a crack-free template image according to the coordinate information of the suspected crack region; and a crack determination module used for comparing the feature map output by the suspected crack feature extraction module with the feature map output by the comparison feature extraction module and determining whether a crack exists in the suspected crack region; step two, performing deep learning model training; and step three, performing aircraft structure crack detection. The method can reduce the influence of interference factors on crack detection accuracy and achieve accurate, fast, real-time identification and early warning of fatigue cracks in aircraft structures.

Description

Aircraft structure crack detection method based on deep learning model
Technical Field
The application belongs to the field of structural health monitoring, and particularly relates to an aircraft structure crack detection method based on a deep learning model.
Background
Metal cracks are a common form of damage in aerospace structures. Finding and warning of such damage in time during fatigue tests of aviation structures exposes weak links in the structural design, supports the evaluation of structural strength and integrity, and provides a basis for compiling aviation structure maintenance manuals. At present, crack detection in full-scale aircraft fatigue tests relies mainly on visual inspection, eddy current, ultrasonic and similar means. These methods depend heavily on expert experience, and because the test environment is complex, detection during test loading is risky, and operation in confined spaces is difficult, crack detection suffers from high labor cost, long detection time and low reliability. Realizing automatic, intelligent and highly reliable detection of aviation structure cracks is therefore an urgent problem in full-scale aircraft fatigue tests.
With the rapid development of robotics and artificial intelligence over the past decade and their application in the civil field, machine vision offers a new solution for automatic crack detection in aircraft fatigue tests. High-definition images of the monitored parts are acquired by a high-precision motion system (such as a crawling robot or a mechanical arm) carrying an industrial camera, a target detection algorithm then identifies cracks automatically and issues damage warnings, greatly reducing the cost, latency and risk associated with manual inspection.
Deep learning target detection algorithms, represented by the Faster Region-based Convolutional Neural Network (Faster-RCNN), are now widely used for object recognition because they are fast and accurate. In aircraft structure fatigue tests, however, the complexity of the environment means the detection area is prone to interference such as fouling and scratches that closely resemble crack features, so directly applying existing target detection algorithms to crack detection yields a high misjudgment rate, which in turn delays the fatigue test.
Accordingly, a technical solution is desired to overcome or at least alleviate at least one of the above-mentioned drawbacks of the prior art.
Disclosure of Invention
The application aims to provide an aircraft structure crack detection method based on a deep learning model so as to solve at least one problem existing in the prior art.
The technical scheme of the application is as follows:
an aircraft structure crack detection method based on a deep learning model comprises the following steps:
step one, constructing a deep learning model, wherein the deep learning model comprises:
the suspected crack feature extraction module is used for extracting a feature map containing a suspected crack area from the image to be detected and acquiring coordinate information of the suspected crack area;
the comparison feature extraction module is used for extracting a feature map of a corresponding region from the template image without the crack according to the coordinate information of the suspected crack region;
the crack determination module is used for comparing the feature map output by the suspected crack feature extraction module with the feature map output by the comparison feature extraction module and determining whether a crack exists in the suspected crack area;
step two, performing deep learning model training;
and step three, detecting aircraft structure cracks through the deep learning model.
In at least one embodiment of the present application, the suspected crack feature extraction module includes an image input unit to be detected, a monitoring area calibration network unit, a basic feature extraction network unit, a suspected crack feature map unit, an area suggestion network unit, and a suspected crack recommendation frame unit, wherein,
the image input unit to be detected is used for inputting an image, wherein,
in the second step, when deep learning model training is carried out, the image input unit to be detected is used for inputting cracked images;
in the third step, when the aircraft structure crack detection is carried out through the deep learning model, the image input unit to be detected is used for inputting an image to be detected;
the monitoring area calibration network unit is used for calibrating the monitoring area of the image input by the image input unit to be detected;
the basic feature extraction network unit is used for extracting basic features from a monitored area;
the suspected crack feature map unit is used for extracting a feature map containing a monitoring area with basic features;
the area suggestion network unit is used for acquiring coordinate information of a monitoring area with basic characteristics;
the suspected crack recommending frame unit is used for generating a recommending frame feature map according to the feature map containing the monitoring area with the basic features.
In at least one embodiment of the present application, the comparison feature extraction module includes a template image input unit, a monitoring area calibration network unit, a basic feature extraction network unit, a comparison feature map unit, and a suspected crack comparison frame unit, wherein,
the template image input unit is used for inputting a crack-free template image, wherein,
in the second step, when deep learning model training is carried out, the template image input unit is used for inputting two crack-free template images;
in the third step, when the aircraft structure crack detection is carried out through the deep learning model, the template image input unit is used for inputting a crack-free template image;
the monitoring area calibration network unit is used for calibrating the monitoring area of the template image input by the template image input unit;
the basic feature extraction network unit is used for extracting basic features from a monitored area;
the comparison feature map unit is used for extracting a feature map containing a monitoring area with basic features;
the suspected crack comparison frame unit is used for extracting, according to the coordinate information output by the area suggestion network unit, the corresponding feature map containing the monitoring area with the basic features, and for generating a comparison frame feature map according to that feature map.
In at least one embodiment of the present application, in the second step, when performing deep learning model training, the comparison frame feature maps generated in the suspected crack comparison frame unit include a first comparison frame feature map generated from the first template image and a second comparison frame feature map generated from the second template image.
In at least one embodiment of the present application, the crack determination module comprises a recommendation frame pooling network unit, a data combination network unit, and a classification network unit, wherein,
the recommendation frame pooling network unit is used for pooling the recommendation frame feature map and the comparison frame feature map;
the data combination network unit is used for rearranging and combining the recommended frame feature map and the comparison frame feature map according to the crack positions;
and the classification network unit is used for screening the recommendation frame characteristic graph with cracks from the rearranged and combined recommendation frame characteristic graph and the comparison frame characteristic graph.
In at least one embodiment of the present application, the recommended frame feature map and the comparison frame feature map after the recommended frame pooling network unit pooling process have the same size.
In at least one embodiment of the present application, in the second step, when deep learning model training is performed, rearranging and combining the recommended frame feature map and the comparison frame feature map according to crack positions in the data combination network unit specifically includes:
and combining the recommendation frame feature map, the first comparison frame feature map and the second comparison frame feature map of the same area into a triple.
In at least one embodiment of the present application, in step two, when deep learning model training is performed, the method for screening a cracked recommended frame feature map from the rearranged and combined recommended frame feature map and the comparison frame feature map in the classification network unit specifically includes:
extracting features of the triples through a deep learning network model, and converting each feature map into a 128-dimensional feature vector after feature normalization;
splicing the 3 feature vectors in each triple, specifically: splicing the feature vector of the recommended frame feature map with that of the first comparison frame feature map, and splicing the feature vector of the first comparison frame feature map with that of the second comparison frame feature map, so that each triple yields two 256-dimensional spliced vectors;
and sending the spliced vectors into a classification layer for classification, screening out 128-dimensional feature vectors of the cracked recommended frame feature map as a classification result, sending the 128-dimensional feature vectors of the cracked recommended frame feature map into a regression layer, and predicting the crack positions.
In at least one embodiment of the present application, in step three, when the aircraft structure crack detection is performed through the deep learning model, rearranging and combining the recommended frame feature map and the comparison frame feature map according to the crack positions in the data combination network unit specifically includes:
and combining the recommended frame feature map and the comparison frame feature map of the same area into a binary group.
In at least one embodiment of the present application, in step three, when the aircraft structure crack detection is performed through the deep learning model, the recommended frame feature map with cracks screened from the rearranged and combined recommended frame feature map and the comparison frame feature map in the classification network unit specifically includes:
further extracting features of the binary group through a deep learning network model, and converting each feature map into a 128-dimensional feature vector after feature normalization;
splicing the 2 feature vectors in each binary group, specifically: splicing the feature vector of the recommendation frame feature map with that of the comparison frame feature map, so that each binary group yields one 256-dimensional spliced vector;
and sending the spliced vectors into a classification layer for classification, screening out 128-dimensional feature vectors of the cracked recommended frame feature map as a classification result, sending the 128-dimensional feature vectors of the cracked recommended frame feature map into a regression layer, and predicting the crack positions.
The invention has at least the following beneficial technical effects:
the aircraft structure crack detection method based on the deep learning model can reduce the influence of interference factors such as scratches and fouling on the crack detection accuracy rate, and realize accurate, quick and real-time identification and early warning of the aircraft structure fatigue cracks.
Drawings
FIG. 1 is an overall architecture of a deep learning model-based aircraft structure crack detection method according to an embodiment of the present application;
FIG. 2 is a flow chart of the operation of a data combining network element according to one embodiment of the present application;
FIG. 3 is a schematic diagram of a classification network element structure according to an embodiment of the present application;
fig. 4 is a schematic diagram of feature concatenation of a classification network element to a triple according to an embodiment of the present application.
Wherein:
1 - suspected crack feature extraction module; 2 - comparison feature extraction module; 3 - crack determination module; 4 - image input unit to be detected; 5 - monitoring area calibration network unit; 6 - basic feature extraction network unit; 7 - suspected crack feature map unit; 8 - area suggestion network unit; 9 - suspected crack recommendation frame unit; 10 - template image input unit; 11 - comparison feature map unit; 12 - suspected crack comparison frame unit; 13 - recommendation frame pooling network unit; 14 - data combination network unit; 15 - classification network unit.
Detailed Description
In order to make the implementation objects, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application will be described in more detail below with reference to the drawings in the embodiments of the present application. In the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The described embodiments are a subset of the embodiments in the present application and not all embodiments in the present application. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
In the description of the present application, it is to be understood that the terms "center", "longitudinal", "lateral", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience in describing the present application and for simplifying the description, and do not indicate or imply that the referenced device or element must have a particular orientation, be constructed in a particular orientation, and be operated, and therefore should not be construed as limiting the scope of the present application.
The present application is described in further detail below with reference to fig. 1 to 4.
The application provides an aircraft structure crack detection method based on a deep learning model, in which the deep learning model comprises a suspected crack feature extraction module 1, a comparison feature extraction module 2 and a crack determination module 3.
As shown in fig. 1, the suspected crack feature extraction module 1 is configured to extract a feature map containing a suspected crack region from an image to be detected, and acquire coordinate information of the suspected crack region; the comparison feature extraction module 2 is used for extracting feature maps of corresponding regions from the two crack-free template images respectively according to the coordinate information of the suspected crack region; the crack determination module 3 is configured to compare the feature map output by the suspected crack feature extraction module 1 with the feature map output by the comparison feature extraction module 2, and determine whether a crack exists in the suspected crack region.
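The overall data flow of the three modules can be illustrated with the following minimal PyTorch-style sketch. It is an illustrative assumption about how such modules could be wired together, not the patented implementation; the class and argument names (CrackDetectionModel, suspected_extractor, comparison_extractor, decider) are hypothetical.

```python
import torch.nn as nn

class CrackDetectionModel(nn.Module):
    """Sketch of the three-module structure: suspected crack feature extraction,
    comparison feature extraction, and crack determination."""
    def __init__(self, suspected_extractor, comparison_extractor, decider):
        super().__init__()
        self.suspected_extractor = suspected_extractor    # module 1
        self.comparison_extractor = comparison_extractor  # module 2
        self.decider = decider                            # module 3

    def forward(self, image_to_detect, template_images):
        # Module 1: feature maps of suspected crack regions and their coordinates
        proposal_feats, proposal_boxes = self.suspected_extractor(image_to_detect)
        # Module 2: feature maps of the same regions cut from the crack-free template(s)
        comparison_feats = [self.comparison_extractor(t, proposal_boxes)
                            for t in template_images]
        # Module 3: compare the two sets of features, decide crack / no crack,
        # and refine the crack position
        return self.decider(proposal_feats, comparison_feats, proposal_boxes)
```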
In a preferred embodiment of the present application, the suspected crack feature extraction module 1 includes an image input unit 4 to be detected, a monitoring area calibration network unit 5, a basic feature extraction network unit 6, a suspected crack feature map unit 7, an area suggestion network (region proposal network, RPN) unit 8, and a suspected crack recommendation frame unit 9, wherein,
the image to be detected input unit 4 is used for inputting an image, wherein,
in the second step, when deep learning model training is carried out, the to-be-detected image input unit 4 is used for inputting cracked images;
in the third step, when the aircraft structure crack detection is carried out through the deep learning model, the image to be detected input unit 4 is used for inputting an image to be detected;
the monitoring area calibration network unit 5 is used for calibrating the monitoring area of the image input by the image input unit 4 to be detected;
the basic feature extraction network unit 6 is used for extracting basic features from the monitored area;
the suspected crack feature map unit 7 is used for extracting a feature map containing a monitoring area with basic features;
the area suggestion network unit 8 is used for acquiring coordinate information of a monitoring area with basic characteristics;
the suspected crack recommendation box unit 9 is configured to generate a recommendation box feature map according to a feature map including a monitoring area with basic features.
In this embodiment, the suspected crack feature map unit 7 can extract a plurality of feature maps of the monitoring area containing the basic features, and the suspected crack recommendation frame unit 9 accordingly generates a plurality of recommendation frame feature maps.
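As a rough sketch of this branch, the backbone and region proposal network of torchvision's Faster R-CNN can stand in for the basic feature extraction network unit 6 and the area suggestion network unit 8. This only illustrates the data flow (a recent torchvision is assumed, and the monitoring area calibration step is omitted); it is not the patent's actual network.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.image_list import ImageList

detector = fasterrcnn_resnet50_fpn(weights=None)   # untrained stand-in network
detector.eval()

image = torch.rand(1, 3, 800, 800)                 # image to be detected (unit 4)
features = detector.backbone(image)                # basic features (units 6 and 7)
image_list = ImageList(image, [(800, 800)])
with torch.no_grad():
    proposals, _ = detector.rpn(image_list, features)   # suspected crack regions (unit 8)
print(proposals[0].shape)                          # [num_proposals, 4] box coordinates fed to unit 9
```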
In the preferred embodiment of the present application, the comparison feature extraction module 2 includes a template image input unit 10, a monitoring area calibration network unit 5, a basic feature extraction network unit 6, a comparison feature map unit 11, and a suspected crack comparison frame unit 12, wherein,
the template image input unit 10 is used to input a crack-free template image, in which,
in the second step, when deep learning model training is performed, the template image input unit 10 is used for inputting two crack-free template images;
in the third step, when detecting the aircraft structure crack through the deep learning model, the template image input unit 10 is used for inputting a crack-free template image;
the monitoring area calibration network unit 5 is used for calibrating the monitoring area of the template image input by the template image input unit 10;
the basic feature extraction network unit 6 is used for extracting basic features from the monitored area;
the comparison feature map unit 11 is used for extracting a feature map containing a monitoring area with basic features;
the suspected crack comparison frame unit 12 is configured to extract a corresponding feature map including a monitoring area with a basic feature according to the coordinate information output by the area recommendation network unit 8, and generate a comparison frame feature map according to the corresponding feature map including the monitoring area with the basic feature.
In this embodiment, the comparison feature map unit 11 extracts a plurality of feature maps of the monitoring area containing the basic features from each template image; in step two, when deep learning model training is performed, the comparison frame feature maps generated in the suspected crack comparison frame unit 12 include a plurality of first comparison frame feature maps generated from the first template image and a plurality of second comparison frame feature maps generated from the second template image.
Advantageously, in this embodiment, the suspected crack feature extraction module 1 and the comparison feature extraction module 2 share the monitoring area calibration network unit 5 and the basic feature extraction network unit 6.
In a preferred embodiment of the present application, the crack determination module 3 comprises a recommendation frame pooling network unit 13, a data combination network unit 14 and a classification network unit 15, wherein,
the recommendation frame pooling network unit 13 is configured to perform pooling on the recommendation frame feature map and the comparison frame feature map;
the data combination network unit 14 is used for rearranging and combining the recommended frame feature map and the comparison frame feature map according to the crack positions;
the classification network unit 15 is configured to screen a recommended frame feature map with cracks from the rearranged and combined recommended frame feature map and the comparison frame feature map.
In this embodiment, the recommended frame feature map and the comparison frame feature map after the pooling process by the recommended frame pooling network unit 13 have the same size.
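A small sketch of this step, assuming a torchvision-based implementation: roi_align crops the suspected crack region out of the feature maps of the image to be detected and of the crack-free template at the same coordinates, so the recommendation frame and comparison frame feature maps come out with identical spatial size. The tensor shapes and image size are illustrative assumptions.

```python
import torch
from torchvision.ops import roi_align

feat_detect = torch.rand(1, 256, 50, 50)      # feature map of the image to be detected
feat_template = torch.rand(1, 256, 50, 50)    # feature map of the crack-free template
# one suspected crack box from the RPN, in input-image coordinates (a 400x400 image is assumed)
boxes = [torch.tensor([[40.0, 60.0, 120.0, 100.0]])]

pooled_rec = roi_align(feat_detect, boxes, output_size=(7, 7), spatial_scale=50 / 400)
pooled_cmp = roi_align(feat_template, boxes, output_size=(7, 7), spatial_scale=50 / 400)
print(pooled_rec.shape, pooled_cmp.shape)     # both [1, 256, 7, 7], i.e. the same size
```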
As shown in fig. 2, in the second step, when deep learning model training is performed, rearranging and combining the recommended frame feature maps and the comparison frame feature maps according to the crack positions in the data combination network unit 14 specifically comprises:
and combining the recommendation frame feature map, the first comparison frame feature map and the second comparison frame feature map of the same area into a triple.
As shown in fig. 3-4, in the second step, when deep learning model training is performed, screening the recommended frame feature maps containing cracks from the rearranged and combined recommended frame feature maps and comparison frame feature maps in the classification network unit 15 specifically comprises:
extracting features of each triple through the deep learning network, and converting each feature map into a 128-dimensional feature vector after feature normalization;
splicing the 3 feature vectors in each triple, specifically: splicing the feature vector of the recommendation frame feature map with that of the first comparison frame feature map, and splicing the feature vector of the first comparison frame feature map with that of the second comparison frame feature map, so that each triple yields two 256-dimensional spliced vectors;
and sending the spliced vectors into a classification layer for classification, screening out, as the classification result, the 128-dimensional feature vectors of the recommended frame feature maps containing cracks, sending these vectors into a regression layer, and predicting the crack positions.
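The following PyTorch sketch illustrates this training-time path under assumed layer sizes (the 256x7x7 input size, the linear layers and the two-class output are illustrative choices, not the patent's exact network): each pooled feature map becomes a normalized 128-dimensional vector, the recommendation/first-comparison and first/second-comparison vectors are spliced into two 256-dimensional vectors feeding the classification layer, and the recommendation-frame vector feeds the regression layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripletHead(nn.Module):
    def __init__(self, in_dim=256 * 7 * 7):
        super().__init__()
        self.embed = nn.Linear(in_dim, 128)   # pooled feature map -> 128-d feature vector
        self.classify = nn.Linear(256, 2)     # 256-d spliced vector -> crack / no crack
        self.regress = nn.Linear(128, 4)      # 128-d crack vector -> crack position refinement

    def embed_map(self, fmap):
        return F.normalize(self.embed(fmap.flatten(1)), dim=1)   # feature normalization

    def forward(self, rec, cmp1, cmp2):
        f_rec, f_c1, f_c2 = map(self.embed_map, (rec, cmp1, cmp2))
        pair_rc1 = torch.cat([f_rec, f_c1], dim=1)    # recommendation frame + comparison frame 1
        pair_c1c2 = torch.cat([f_c1, f_c2], dim=1)    # comparison frame 1 + comparison frame 2
        logits = (self.classify(pair_rc1), self.classify(pair_c1c2))
        box_delta = self.regress(f_rec)               # crack position prediction
        return (f_rec, f_c1, f_c2), logits, box_delta

head = TripletHead()
rec, c1, c2 = (torch.rand(4, 256, 7, 7) for _ in range(3))   # a batch of 4 triples
feats, logits, boxes = head(rec, c1, c2)
```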
In the third step, when aircraft structure crack detection is performed through the deep learning model, rearranging and combining the recommended frame feature maps and the comparison frame feature maps according to the crack positions in the data combination network unit 14 specifically comprises:
combining the recommendation frame feature map and the comparison frame feature map of the same region into a binary group.
In the third step, when aircraft structure crack detection is performed through the deep learning model, screening the recommended frame feature maps containing cracks from the rearranged and combined recommended frame feature maps and comparison frame feature maps in the classification network unit 15 specifically comprises:
further extracting features of each binary group through the deep learning network, and converting each feature map into a 128-dimensional feature vector after feature normalization;
splicing the 2 feature vectors in each binary group, specifically: splicing the feature vector of the recommendation frame feature map with that of the comparison frame feature map, so that each binary group yields one 256-dimensional spliced vector;
and sending the spliced vectors into a classification layer for classification, screening out, as the classification result, the 128-dimensional feature vectors of the recommended frame feature maps containing cracks, sending these vectors into a regression layer, and predicting the crack positions.
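At prediction time only one template is available, so the same kind of head works on binary groups. The sketch below (same assumed layer sizes as above, untrained layers for illustration) splices the single recommendation/comparison pair and reads a crack probability off the classification layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Linear(256 * 7 * 7, 128)    # pooled feature map -> 128-d feature vector
classify = nn.Linear(256, 2)           # 256-d spliced vector -> crack / no crack

def predict_crack(rec_map, cmp_map):
    f_rec = F.normalize(embed(rec_map.flatten(1)), dim=1)
    f_cmp = F.normalize(embed(cmp_map.flatten(1)), dim=1)
    pair = torch.cat([f_rec, f_cmp], dim=1)       # one 256-d spliced vector per binary group
    return classify(pair).softmax(dim=1)[:, 1]    # probability that the region contains a crack

scores = predict_crack(torch.rand(4, 256, 7, 7), torch.rand(4, 256, 7, 7))
```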
The aircraft structure crack detection method based on the deep learning model uses the suspected crack screening approach of Faster-RCNN and, in view of the practical problem of crack detection in fatigue tests, introduces a comparison mechanism in the crack determination stage. In the model, the suspected crack feature extraction module 1, the comparison feature extraction module 2 and the recommendation frame pooling network unit 13 extend the network architecture and design of the corresponding modules in Faster-RCNN.
After initial construction, the aircraft structure crack detection method based on the deep learning model proceeds in two stages: model training and model prediction. In the model training stage, the parameters of the deep learning model are optimized by learning from images with known cracks and the corresponding crack-free template images, and the model works in triple mode; in the model prediction stage, the deep learning model judges whether cracks exist in the picture acquired in real time (which may contain cracks) by comparing its features with those of the template picture (no cracks), and the model works in binary-group mode.
The working principle of the present application is described below in the order of model training and model prediction.
In the model training stage, the input image of the image input unit 4 to be detected is an image with known cracks, and the input of the template image input unit 10 is two crack-free images (denoted 10-1 and 10-2) that have the same field of view as the input image of unit 4 but differ in illumination and sharpness. The model extracts the recommendation frame feature maps of the input image and the corresponding comparison frame feature maps (denoted 12-1 and 12-2) through the basic feature extraction network unit 6, the suspected crack feature map unit 7, the area suggestion network unit 8, the suspected crack recommendation frame unit 9, the comparison feature map unit 11 and the suspected crack comparison frame unit 12. The recommendation frames and comparison frames are then brought to the same size by the recommendation frame pooling network unit 13, assembled into a triple queue by the data combination network unit 14, and fed into the classification network unit 15 to train its parameters.
The training process of the model is divided into two stages: training of the suspected crack feature extraction module 1 and training of the crack determination module 3. That is, a batch of pictures is input and the network parameters of the suspected crack feature extraction module 1 are trained; the output of the suspected crack feature extraction module 1 is then used as input and the network parameters of the crack determination module 3 are trained; the next batch of pictures is processed in the same way until the network converges. The loss function and training method of the suspected crack feature extraction module 1 are the same as those of the region proposal network (RPN) of Faster-RCNN. The total loss function Loss_total of the crack determination module 3 can be expressed as:
Loss_total = Triplet_Loss + Cross_Entropy_Loss + Regression_Loss    (1)
In the formula, Triplet_Loss is the triplet loss, which can be expressed as:
Triplet_Loss = (1/N) Σ_{i=1}^{N} max( ||f(x_i^{c1}) - f(x_i^{c2})||² - ||f(x_i^{c1}) - f(x_i^{r})||² + α, 0 )    (2)
where N represents the total number of triples input to the classification network unit 15, f(·) represents the 128-dimensional feature vector extracted by the classification network unit 15 from a recommendation frame feature map or a comparison frame feature map, α is the triplet margin, and x_i^{r}, x_i^{c1} and x_i^{c2} denote the recommendation frame feature map, the first comparison frame feature map and the second comparison frame feature map of the i-th triple, respectively.
This loss term reduces the Euclidean distance between the feature vectors of the two comparison frames as much as possible while increasing the Euclidean distance between the comparison frame feature vectors and the recommendation frame feature vector, so that the 128-dimensional feature vectors extracted by the model represent the image differences caused by cracks and are not easily influenced by factors such as lighting and sharpness.
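A compact PyTorch sketch of this triplet term as reconstructed in formula (2) above (the margin value is an assumption): it pulls the two comparison frame vectors together and pushes the recommendation frame vector away from them.

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_rec, f_c1, f_c2, margin=1.0):
    d_pos = (f_c1 - f_c2).pow(2).sum(dim=1)   # distance between the two comparison frames
    d_neg = (f_c1 - f_rec).pow(2).sum(dim=1)  # distance between comparison frame and recommendation frame
    return F.relu(d_pos - d_neg + margin).mean()

loss = triplet_loss(torch.rand(4, 128), torch.rand(4, 128), torch.rand(4, 128))
```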
Cross_Entropy_Loss in formula (1) is the cross entropy loss, which can be expressed as:
Cross_Entropy_Loss = -(1/N) Σ_{i=1}^{N} Σ_{j} [ y_i^j log g(v_i^j) + (1 - y_i^j) log(1 - g(v_i^j)) ],  j ∈ {(r, c1), (c1, c2)}    (3)
where N represents the total number of triples input to the classification network unit 15, g(·) represents the classification result output by the classification network unit 15, with g(·) ∈ [0, 1] (0 indicates no crack, 1 indicates a crack), v_i^{(r, c1)} and v_i^{(c1, c2)} respectively represent the 256-dimensional spliced vectors obtained by splicing the recommendation frame with comparison frame 12-1, and comparison frame 12-1 with comparison frame 12-2, in the i-th triple, and y_i^j is the corresponding crack label (the comparison-comparison pair is always labelled crack-free).
This loss term optimizes the network parameters of the classification layer (fig. 3), with the aim of increasing the inter-class distance between the 256-dimensional features as much as possible.
Regression_Loss in formula (1) is the regression loss of the crack position.
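Putting the three terms of formula (1) together, a training step could compute the total loss as in the sketch below. The equal weighting of the terms, the smooth-L1 regression, and the use of the crack label for the recommendation/comparison pair (with the comparison/comparison pair always labelled crack-free) are assumptions consistent with the description, not the patent's exact formulation.

```python
import torch
import torch.nn.functional as F

def loss_total(feats, logits, box_pred, crack_label, box_target, margin=1.0):
    f_rec, f_c1, f_c2 = feats                 # 128-d vectors from the classification network unit
    logits_rc1, logits_c1c2 = logits          # scores of the two 256-d spliced vectors
    # Triplet_Loss: pull the comparison frames together, push the recommendation frame away
    trip = F.relu((f_c1 - f_c2).pow(2).sum(1)
                  - (f_c1 - f_rec).pow(2).sum(1) + margin).mean()
    # Cross_Entropy_Loss: recommendation/comparison pair carries the crack label,
    # comparison/comparison pair is always treated as "no crack"
    ce = F.cross_entropy(logits_rc1, crack_label) \
         + F.cross_entropy(logits_c1c2, torch.zeros_like(crack_label))
    # Regression_Loss on the predicted crack position
    reg = F.smooth_l1_loss(box_pred, box_target)
    return trip + ce + reg

# example call with dummy tensors for a batch of 4 triples
feats = tuple(torch.rand(4, 128) for _ in range(3))
logits = (torch.rand(4, 2), torch.rand(4, 2))
loss = loss_total(feats, logits, torch.rand(4, 4),
                  torch.randint(0, 2, (4,)), torch.rand(4, 4))
```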
In the model prediction stage, the input image of the image input unit 4 to be detected is an image acquired in real time during the fatigue test, and the input of the template image input unit 10 is a crack-free image acquired in the early stage of the test with the same field of view as the image to be detected. The model first extracts the recommendation frame feature map and the corresponding comparison frame feature map, and then obtains a binary group composed of the two feature maps through the recommendation frame pooling network unit 13. Finally, the classification network unit 15 extracts the two 128-dimensional feature vectors of the binary group, splices them into a 256-dimensional vector, and sends it to the classification layer to obtain the prediction result of the model.
The aircraft structure crack detection method based on the deep learning model covers four aspects of design: the comparison image acquisition method, the data processing flow, the classification network structure and the loss function of the classification network, specifically:
1. Based on the structure and design of the region proposal network (RPN) in Faster-RCNN, two parallel networks are designed to process images of the same detection position acquired at different times, and the recommended frame feature map and the comparison frame feature map of each suspected crack region are cropped out;
2. All recommended frame feature maps and comparison frame feature maps in a batch are reordered according to position;
3. During deep learning model training, the 128-dimensional feature vector of each recommendation frame feature map and each comparison frame feature map is extracted in triple order, features are spliced in the recommendation frame-comparison frame and comparison frame 1-comparison frame 2 modes, and the spliced vectors are sent into the classification network unit for classification;
4. The loss function of the classification network is designed with consideration of factors such as the sensitivity of the classification result to image quality and the network convergence speed.
Compared with common target detection algorithms in large-field-of-view detection tasks, the aircraft structure crack detection method based on the deep learning model introduces a comparison mechanism into the deep learning model in view of the practical problem of crack detection in fatigue tests, which suppresses the influence of interference factors such as scratches and stains on detection accuracy and provides a basis for real-time, reliable damage warning in aircraft structure fatigue tests. In addition, the design of the comparison mechanism takes the directional nature of crack growth into account, and a triplet loss is introduced into the ordinary classification loss function, so that the model can distinguish crack features from changes in lighting and sharpness, converge faster, and be less affected by variations in image quality.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An aircraft structure crack detection method based on a deep learning model is characterized by comprising the following steps:
step one, constructing a deep learning model, wherein the deep learning model comprises:
the suspected crack feature extraction module (1) is used for extracting a feature map containing a suspected crack area from an image to be detected and acquiring coordinate information of the suspected crack area;
the comparison feature extraction module (2) is used for extracting a feature map of a corresponding region from the template image without the crack according to the coordinate information of the suspected crack region;
the crack determination module (3) is used for comparing the feature map output by the suspected crack feature extraction module (1) with the feature map output by the comparison feature extraction module (2) and determining whether a crack exists in the suspected crack area;
step two, performing deep learning model training;
and step three, detecting aircraft structure cracks through the deep learning model.
2. The aircraft structure crack detection method based on the deep learning model according to claim 1, characterized in that the suspected crack feature extraction module (1) comprises an image input unit (4) to be detected, a monitoring area calibration network unit (5), a basic feature extraction network unit (6), a suspected crack feature map unit (7), an area suggestion network unit (8) and a suspected crack recommendation frame unit (9), wherein,
the image input unit (4) to be detected is used for inputting an image, wherein,
in the second step, when deep learning model training is carried out, the image input unit (4) to be detected is used for inputting cracked images;
in the third step, when the aircraft structure crack detection is carried out through the deep learning model, the to-be-detected image input unit (4) is used for inputting an to-be-detected image;
the monitoring area calibration network unit (5) is used for calibrating the monitoring area of the image input by the image input unit (4) to be detected;
the basic feature extraction network unit (6) is used for extracting basic features from a monitored area;
the suspected crack feature map unit (7) is used for extracting a feature map containing a monitoring area with basic features;
the area suggestion network unit (8) is used for acquiring coordinate information of a monitoring area with basic characteristics;
the suspected crack recommendation box unit (9) is used for generating a recommendation box feature map according to the feature map containing the monitoring area with the basic features.
3. The aircraft structure crack detection method based on the deep learning model according to claim 2, characterized in that the comparison feature extraction module (2) comprises a template image input unit (10), a monitoring area calibration network unit (5), a basic feature extraction network unit (6), a comparison feature map unit (11) and a suspected crack comparison frame unit (12), wherein,
the template image input unit (10) is used for inputting a crack-free template image, wherein,
in the second step, when deep learning model training is carried out, the template image input unit (10) is used for inputting two crack-free template images;
in the third step, when the aircraft structure crack detection is carried out through the deep learning model, the template image input unit (10) is used for inputting a crack-free template image;
the monitoring area calibration network unit (5) is used for calibrating the monitoring area of the template image input by the template image input unit (10);
the basic feature extraction network unit (6) is used for extracting basic features from a monitored area;
the comparison feature map unit (11) is used for extracting a feature map containing a monitoring area with basic features;
the suspected crack comparison frame unit (12) is used for extracting, according to the coordinate information output by the area suggestion network unit (8), the corresponding feature map containing the monitoring area with the basic features, and for generating a comparison frame feature map according to that feature map.
4. The aircraft structure crack detection method based on the deep learning model as claimed in claim 3, wherein in the second step, during deep learning model training, the comparison frame feature maps generated in the suspected crack comparison frame unit (12) comprise a first comparison frame feature map generated based on a first template image and a second comparison frame feature map generated based on a second template image.
5. The aircraft structure crack detection method based on the deep learning model according to claim 4, characterized in that the crack determination module (3) comprises a recommendation frame pooling network unit (13), a data combination network unit (14) and a classification network unit (15), wherein,
the recommendation frame pooling network unit (13) is used for pooling the recommendation frame feature map and the comparison frame feature map;
the data combination network unit (14) is used for rearranging and combining the recommended frame feature map and the comparison frame feature map according to the crack positions;
and the classification network unit (15) is used for screening the recommendation frame feature map with cracks from the rearranged and combined recommendation frame feature map and the comparison frame feature map.
6. The aircraft structure crack detection method based on the deep learning model according to claim 5, characterized in that the recommended frame feature map and the comparison frame feature map have the same size after pooling by the recommended frame pooling network unit (13).
7. The aircraft structure crack detection method based on the deep learning model as claimed in claim 6, wherein in the second step, during the deep learning model training, the data combination network unit (14) rearranges and combines the recommended frame feature map and the comparison frame feature map according to the crack positions specifically as follows:
and combining the recommendation frame feature map, the first comparison frame feature map and the second comparison frame feature map of the same area into a triple.
8. The aircraft structure crack detection method based on the deep learning model according to claim 7, wherein in the second step, when the deep learning model training is performed, the recommended frame feature map with cracks screened from the rearranged and combined recommended frame feature map and the comparison frame feature map in the classification network unit (15) is specifically:
extracting features of the triples through a deep learning network model, and converting each feature map into a 128-dimensional feature vector after feature normalization;
splicing the 3 feature vectors in each triple, specifically: splicing the feature vector of the recommended frame feature map with that of the first comparison frame feature map, and splicing the feature vector of the first comparison frame feature map with that of the second comparison frame feature map, so that each triple yields two 256-dimensional spliced vectors;
and sending the spliced vectors into a classification layer for classification, screening out 128-dimensional feature vectors of the cracked recommended frame feature map as a classification result, sending the 128-dimensional feature vectors of the cracked recommended frame feature map into a regression layer, and predicting the crack positions.
9. The aircraft structure crack detection method based on the deep learning model according to claim 6, wherein in step three, when the aircraft structure crack detection is performed through the deep learning model, the rearranging and combining of the recommended frame feature map and the comparison frame feature map according to the crack positions in the data combination network unit (14) is specifically:
and combining the recommended frame feature map and the comparison frame feature map of the same area into a binary group.
10. The aircraft structure crack detection method based on the deep learning model according to claim 9, wherein in step three, when the aircraft structure crack detection is performed by the deep learning model, the recommended frame feature map with cracks screened from the rearranged and combined recommended frame feature map and the comparison frame feature map in the classification network unit (15) is specifically:
further extracting features of the binary group through a deep learning network model, and converting each feature map into a 128-dimensional feature vector after feature normalization;
splicing the 2 feature vectors in each binary group, specifically: splicing the feature vector of the recommendation frame feature map with that of the comparison frame feature map, so that each binary group yields one 256-dimensional spliced vector;
and sending the spliced vectors into a classification layer for classification, screening out 128-dimensional feature vectors of the cracked recommended frame feature map as a classification result, sending the 128-dimensional feature vectors of the cracked recommended frame feature map into a regression layer, and predicting the crack positions.
CN202110970084.1A, filed 2021-08-23 (priority 2021-08-23): Aircraft structure crack detection method based on deep learning model. Status: Active; granted as CN113706496B (en).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110970084.1A CN113706496B (en) 2021-08-23 2021-08-23 Aircraft structure crack detection method based on deep learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110970084.1A CN113706496B (en) 2021-08-23 2021-08-23 Aircraft structure crack detection method based on deep learning model

Publications (2)

Publication Number Publication Date
CN113706496A true CN113706496A (en) 2021-11-26
CN113706496B CN113706496B (en) 2024-04-12

Family

ID=78654184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110970084.1A Active CN113706496B (en) 2021-08-23 2021-08-23 Aircraft structure crack detection method based on deep learning model

Country Status (1)

Country Link
CN (1) CN113706496B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115077832A (en) * 2022-07-28 2022-09-20 西安交通大学 Method for measuring vibration fatigue damage of three-dimensional surface of high-temperature-resistant component of airplane
CN116309557A (en) * 2023-05-16 2023-06-23 山东聚宁机械有限公司 Method for detecting fracture of track shoe of excavator

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009019A (en) * 2019-03-26 2019-07-12 苏州富莱智能科技有限公司 Magnetic material crackle intelligent checking system and method
WO2020164270A1 (en) * 2019-02-15 2020-08-20 平安科技(深圳)有限公司 Deep-learning-based pedestrian detection method, system and apparatus, and storage medium
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
KR102157610B1 (en) * 2019-10-29 2020-09-18 세종대학교산학협력단 System and method for automatically detecting structural damage by generating super resolution digital images
JP6807093B1 (en) * 2020-09-24 2021-01-06 株式会社センシンロボティクス Inspection system and management server, program, crack information provision method
JP6807092B1 (en) * 2020-09-24 2021-01-06 株式会社センシンロボティクス Inspection system and management server, program, crack information provision method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020164270A1 (en) * 2019-02-15 2020-08-20 平安科技(深圳)有限公司 Deep-learning-based pedestrian detection method, system and apparatus, and storage medium
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN110009019A (en) * 2019-03-26 2019-07-12 苏州富莱智能科技有限公司 Magnetic material crackle intelligent checking system and method
KR102157610B1 (en) * 2019-10-29 2020-09-18 세종대학교산학협력단 System and method for automatically detecting structural damage by generating super resolution digital images
JP6807093B1 (en) * 2020-09-24 2021-01-06 株式会社センシンロボティクス Inspection system and management server, program, crack information provision method
JP6807092B1 (en) * 2020-09-24 2021-01-06 株式会社センシンロボティクス Inspection system and management server, program, crack information provision method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
衣世东: "基于深度学习的图像识别算法研究" [Research on image recognition algorithms based on deep learning], 网络安全技术与应用 [Network Security Technology & Application], no. 01 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115077832A (en) * 2022-07-28 2022-09-20 西安交通大学 Method for measuring vibration fatigue damage of three-dimensional surface of high-temperature-resistant component of airplane
CN115077832B (en) * 2022-07-28 2022-11-08 西安交通大学 Method for measuring vibration fatigue damage of three-dimensional surface of high-temperature-resistant component of airplane
CN116309557A (en) * 2023-05-16 2023-06-23 山东聚宁机械有限公司 Method for detecting fracture of track shoe of excavator

Also Published As

Publication number Publication date
CN113706496B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN112380952B (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
CN110598736B (en) Power equipment infrared image fault positioning, identifying and predicting method
CN110321923B (en) Target detection method, system and medium for fusion of different-scale receptive field characteristic layers
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN107437245B (en) High-speed railway contact net fault diagnosis method based on deep convolutional neural network
CN107742093B (en) Real-time detection method, server and system for infrared image power equipment components
KR102166458B1 (en) Defect inspection method and apparatus using image segmentation based on artificial neural network
US10621717B2 (en) System and method for image-based target object inspection
CN111814850A (en) Defect detection model training method, defect detection method and related device
CN107966454A (en) A kind of end plug defect detecting device and detection method based on FPGA
CN109840900A (en) A kind of line detection system for failure and detection method applied to intelligence manufacture workshop
CN113706496B (en) Aircraft structure crack detection method based on deep learning model
CN113920107A (en) Insulator damage detection method based on improved yolov5 algorithm
US20220405586A1 (en) Model generation apparatus, estimation apparatus, model generation method, and computer-readable storage medium storing a model generation program
JP2021515885A (en) Methods, devices, systems and programs for setting lighting conditions and storage media
CN113469950A (en) Method for diagnosing abnormal heating defect of composite insulator based on deep learning
CN112164048A (en) Magnetic shoe surface defect automatic detection method and device based on deep learning
CN116543247A (en) Data set manufacturing method and verification system based on photometric stereo surface reconstruction
CN117455917B (en) Establishment of false alarm library of etched lead frame and false alarm on-line judging and screening method
CN111160100A (en) Lightweight depth model aerial photography vehicle detection method based on sample generation
CN116485802B (en) Insulator flashover defect detection method, device, equipment and storage medium
CN112750113A (en) Glass bottle defect detection method and device based on deep learning and linear detection
CN114596244A (en) Infrared image identification method and system based on visual processing and multi-feature fusion
CN116843691A (en) Photovoltaic panel hot spot detection method, storage medium and electronic equipment
CN110610136A (en) Transformer substation equipment identification module and identification method based on deep learning

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant