CN112381792A - Radar wave-absorbing coating/electromagnetic shielding film damage intelligent imaging online detection method based on deep learning - Google Patents


Info

Publication number
CN112381792A
CN112381792A (application CN202011273131.9A)
Authority
CN
China
Prior art keywords
electromagnetic shielding
absorbing coating
shielding film
radar wave
sar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011273131.9A
Other languages
Chinese (zh)
Other versions
CN112381792B (en)
Inventor
魏小龙
徐浩军
李益文
裴彬彬
何卫锋
武欣
李玉琴
张琳
化为卓
韩欣珉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Air Force Engineering University of PLA
Original Assignee
Air Force Engineering University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Air Force Engineering University of PLA filed Critical Air Force Engineering University of PLA
Priority to CN202011273131.9A (granted as CN112381792B)
Publication of CN112381792A
Application granted
Publication of CN112381792B
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0004 Industrial image inspection
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20081 Training; Learning
              • G06T 2207/20084 Artificial neural networks [ANN]
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30108 Industrial image inspection
                • G06T 2207/30156 Vehicle coating
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/045 Combinations of networks
              • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
          • Y02T 10/00 Road transport of goods or passengers
            • Y02T 10/10 Internal combustion engine [ICE] based vehicles
              • Y02T 10/40 Engine management systems


Abstract

The invention discloses a deep-learning-based intelligent imaging online detection method for damage to radar wave-absorbing coatings/electromagnetic shielding films, comprising the following steps: acquiring SAR two-dimensional images of damaged and undamaged radar wave-absorbing coating/electromagnetic shielding film together with the corresponding optical images; judging the position and shape of the damage on the optical images; processing the SAR two-dimensional images to obtain a training set and a test set; inputting the training set into YOLO-V3 for training; and processing the SAR two-dimensional image and optical image to be detected, inputting them into the optimized YOLO-V3 model, detecting whether damage exists, and, if so, marking the damage on the optical image. By obtaining the detection model through deep learning, the method is convenient for workers to operate, reduces dependence on professionals, lowers cost, and improves efficiency.

Description

Radar wave-absorbing coating/electromagnetic shielding film damage intelligent imaging online detection method based on deep learning
Technical Field
The invention belongs to the technical field of damage detection of radar wave-absorbing coatings and electromagnetic shielding materials, and relates to an intelligent imaging online detection method for damage of radar wave-absorbing coatings/electromagnetic shielding films based on deep learning.
Background
A radar absorbing coating is a paint-type coating with an electromagnetic wave absorbing function, and an electromagnetic shielding film has electromagnetic wave shielding capability. Radar wave-absorbing coatings and electromagnetic shielding films offer simple application, good wave-absorbing or electromagnetic shielding performance, and no limitation by weapon shape, and are therefore widely used in the stealth design of weaponry such as aircraft, missiles, naval vessels, and tank armored vehicles. However, during service and training, factors such as collision, scratching, and natural aging can cause physical and chemical damage, such as local shedding and corrosion oxidation, which reduces or destroys the wave-absorbing capacity of the coating or the shielding capacity of the film and seriously degrades the stealth performance of the equipment. Research on online detection of damage to radar wave-absorbing coatings and electromagnetic shielding films is therefore of great significance.
At present, online detection of damage to radar wave-absorbing coatings and electromagnetic shielding films relies mainly on visual inspection and electromagnetic parameter measurement. Visual inspection cannot directly capture the absorption attenuation of the wave-absorbing coating, while electromagnetic parameter measurement requires standard samples, is unsuitable for special structures such as curved surfaces, and is poorly suited to online use. Convolutional neural networks have been applied to online nondestructive detection of coating damage defects, but most studies rely on optical images of the object, which can reflect only the physical dimensions of the damage (depth, area, and so on). The change in local scattering characteristics caused by damage to a coating or film depends on the position and specific shape of the damage, so the change in local stealth performance cannot be inferred from physical dimensions alone. Obtaining the change in scattering characteristics through local two-dimensional scattering imaging is therefore the key to online detection of radar wave-absorbing coating and electromagnetic shielding film damage. Detection based on two-dimensional scattering imaging, however, depends heavily on trained professionals, which leads to high detection cost and low efficiency, and human factors readily cause misjudgment and missed detection, reducing detection accuracy.
Disclosure of Invention
To this end, the invention provides a deep-learning-based intelligent imaging online detection method for radar wave-absorbing coating/electromagnetic shielding film damage. A detection model obtained through deep learning simplifies operation for workers, reduces dependence on professionals, lowers cost, and improves efficiency, addressing the prior-art problems of limited detectable damage types, complex procedures, heavy dependence on professionals, high detection cost, low efficiency, and poor detection accuracy.
The invention adopts the technical scheme that the intelligent imaging online detection method of the radar wave-absorbing coating/electromagnetic shielding film damage based on deep learning comprises the following steps:
S10, measuring the wave-absorbing characteristics of the radar wave-absorbing coating/electromagnetic shielding film by a relative method, obtaining SAR two-dimensional images of damaged and undamaged radar wave-absorbing coating/electromagnetic shielding film respectively, and collecting the corresponding optical images;
S20, according to the SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film obtained in step S10 and the change in the corresponding optical image, judging the damage position and shape on the optical image of the damaged radar wave-absorbing coating/electromagnetic shielding film, framing the damage with a rectangular box on the optical image, and marking it in the format [x_min, y_min, x_max, y_max], where (x_min, y_min) are the coordinates of the upper-left corner of the rectangular box and (x_max, y_max) the coordinates of its lower-right corner;
S30, normalizing the SAR two-dimensional images of the radar wave-absorbing coating/electromagnetic shielding film obtained in step S10;
S40, applying data enhancement simultaneously to the SAR two-dimensional images and corresponding optical images obtained in step S10 to expand the data samples, and splitting the expanded samples into a training set and a test set in an 8:2 ratio;
S50, inputting the training set obtained in step S40 into the convolutional neural network YOLO-V3 for training; during training, inputting the test set into the trained network, monitoring its detection precision on the test set in real time, and optimizing YOLO-V3 by adjusting its hyper-parameters, to obtain the optimized convolutional neural network YOLO-V3 model for detecting internal and external damage of the radar wave-absorbing coating/electromagnetic shielding film;
S60, acquiring the SAR two-dimensional image and optical image of the radar wave-absorbing coating/electromagnetic shielding film to be detected as in step S10, processing them as in step S30, and inputting them into the optimized convolutional neural network YOLO-V3 model obtained in step S50; detecting whether the radar wave-absorbing coating/electromagnetic shielding film contains damage and, if so, obtaining the position coordinates of the damage and marking it on the optical image of the radar wave-absorbing coating/electromagnetic shielding film.
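The inference stage described in steps S30 and S60 can be sketched roughly as follows. This is an illustrative sketch, not the patent's implementation: `detect_and_mark` is a hypothetical name, and the detector is assumed to be any callable that returns a list of [x_min, y_min, x_max, y_max] boxes (empty if no damage is found).

```python
import numpy as np

def detect_and_mark(sar_img, optical_img, model, mu, sigma):
    """Sketch of steps S30 + S60: normalize the SAR image, run a trained
    detector, and draw each predicted damage box onto the optical image.
    """
    norm = (sar_img.astype(np.float64) - mu) / sigma   # step S30 normalization
    boxes = model(norm)                                # [x_min, y_min, x_max, y_max] boxes
    marked = optical_img.copy()
    for x0, y0, x1, y1 in boxes:
        marked[y0, x0:x1] = 255          # top edge
        marked[y1 - 1, x0:x1] = 255      # bottom edge
        marked[y0:y1, x0] = 255          # left edge
        marked[y0:y1, x1 - 1] = 255      # right edge
    return boxes, marked
```

An empty box list leaves the optical image unmarked, matching the "only mark if damage exists" behavior of step S60.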
Further, step S10 specifically includes the following steps:
step S11: erecting a transmitting antenna and a receiving antenna in a fully shielded wave-absorbing anechoic chamber; setting the spacing between the transmitting and receiving antennas and their distance to the target area; the polarization mode is VV polarization; the transmitting and receiving antennas are fixed on a slide rail; and the working frequency of the vector network analyzer is set to 8 GHz–12 GHz;
step S12: placing a foam bracket in a target area, setting a total scanning range and a measurement interval, scanning a transmitting-receiving antenna on a slide rail from left to right according to a sequence of walking-stopping-walking-stopping, and storing a measurement result as a background level data file;
step S13: placing a calibration body on the foam bracket, scanning the transceiving antenna from left to right on the slide rail according to the sequence of walking-stopping-walking-stopping according to the total scanning range and the measurement distance set in the step S12, and storing the measurement result as a calibration body echo data file;
step S14: placing a radar absorbing coating/electromagnetic shielding film on the foam support, scanning the transceiving antenna on the slide rail from left to right according to the walking-stopping-walking-stopping sequence according to the total scanning range and the measurement interval set in the step S12, and storing the measurement result as a target echo data file;
step S15: processing the test result data obtained in the steps S12, S13 and S14 on a computer to obtain an SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film;
step S16: optical images of the radar absorbing coating/electromagnetic shielding film are acquired from a fixed angle and distance using an industrial camera.
Further, step S30 specifically includes the following steps: respectively calculating the mean value mu and the variance sigma of the pixel values of the RGB three channels of the SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film obtained in the step S10; then, the brightness and contrast of the SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film obtained in the step S10 are normalized and adjusted by adopting the following formula:
f(x, y) = (g(x, y) − μ) / σ
in the formula, f (x, y) represents the image pixel after adjustment, g (x, y) represents the image pixel before adjustment, and (x, y) represents the coordinate position of the pixel point.
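A minimal sketch of this per-channel normalization, with the statistics computed from the image itself (the embodiment below uses fixed dataset-level values of μ and σ instead); `normalize_sar` is an illustrative name:

```python
import numpy as np

def normalize_sar(image):
    """Per-channel normalization f(x, y) = (g(x, y) - mu) / sigma.

    `image` is an H x W x 3 RGB array; mu and sigma are computed per
    channel, so each output channel has zero mean and unit variance.
    """
    img = image.astype(np.float64)
    mu = img.mean(axis=(0, 1))     # per-channel mean
    sigma = img.std(axis=(0, 1))   # per-channel standard deviation
    return (img - mu) / sigma
```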
Further, in step S40, the data enhancement includes any one or more of random rotation, random cropping, random scaling, and mosaic augmentation.
Further, step S50 specifically includes the following steps:
inputting the SAR two-dimensional images of the radar absorbing coating/electromagnetic shielding film in the training set obtained in step S40 into the convolutional neural network YOLO-V3 and obtaining its prediction [x_min, y_min, x_max, y_max]_prediction; comparing it with the real mark [x_min, y_min, x_max, y_max] of the optical image corresponding to the SAR two-dimensional image; calculating a loss function and taking its gradient; then correcting the model to reduce the loss-function value; performing at least 3000 rounds of training, inputting the test set into the trained convolutional neural network YOLO-V3 after every 10 rounds, monitoring its detection precision on the test set in real time, and obtaining the optimized convolutional neural network YOLO-V3 model for detecting internal and external damage of the radar wave-absorbing coating/electromagnetic shielding film by adjusting the hyper-parameters.
Further, in step S50, the initial learning rate of the convolutional neural network YOLO-V3 is 0.0001, and the learning rate is varied according to the following (cosine annealing) schedule:

η_t = η_min + (1/2)(η_max − η_min)(1 + cos(π·T_cur / T))

where η_t is the learning rate of the current training round, η_max and η_min are the maximum and minimum values of the learning rate, T_cur is the number of training rounds performed so far, and T is the total number of training rounds.
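A quick numerical sketch of this schedule (the values η_max = 1e-4 and η_min = 1e-6 are illustrative defaults, not from the patent):

```python
import math

def cosine_lr(t_cur, t_total, eta_min=1e-6, eta_max=1e-4):
    """Cosine-annealed learning rate eta_t for the current training round:
    starts at eta_max, decays smoothly to eta_min over t_total rounds."""
    return eta_min + 0.5 * (eta_max - eta_min) * (
        1.0 + math.cos(math.pi * t_cur / t_total))
```

At t_cur = 0 the rate equals η_max, at t_cur = T it equals η_min, and at the midpoint it is exactly their average.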
Further, the loss function L is given by:

L = Σ_{i=0}^{S²} Σ_{j=0}^{B} I_{ij}^{obj} [ (x_i − x̂_i)² + (y_i − ŷ_i)² + (√w_i − √ŵ_i)² + (√h_i − √ĥ_i)² ]

where S² is the number of grids generated by the convolutional neural network; B is the number of candidate boxes generated in the network centered on each grid cell; I_{ij}^{obj} indicates that the j-th candidate box of the i-th grid contains damage; x_i and y_i are the abscissa and ordinate of the center point of the output prediction box, and x̂_i and ŷ_i those of the real box of the input label; h_i and w_i are the length and width of the output prediction box, and ĥ_i and ŵ_i the length and width of the real box of the input label.
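The coordinate terms of this loss can be sketched in a few lines. This is a simplified sketch covering only the localization terms stated above (`coord_loss` is a hypothetical name; the full YOLO-V3 loss also includes confidence and class terms not described here):

```python
import math

def coord_loss(pred_boxes, true_boxes, obj_mask):
    """YOLO-style coordinate loss over S^2 grids x B candidate boxes.

    pred_boxes/true_boxes: nested lists [grid][box] of (x, y, w, h);
    obj_mask[i][j] is 1 if the j-th candidate box of grid i contains damage.
    Square roots on width/height damp the penalty for large boxes.
    """
    total = 0.0
    for i, grid in enumerate(pred_boxes):
        for j, (x, y, w, h) in enumerate(grid):
            if not obj_mask[i][j]:
                continue  # only boxes that contain damage contribute
            tx, ty, tw, th = true_boxes[i][j]
            total += (x - tx) ** 2 + (y - ty) ** 2
            total += (math.sqrt(w) - math.sqrt(tw)) ** 2
            total += (math.sqrt(h) - math.sqrt(th)) ** 2
    return total
```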
Further, the model is corrected to reduce the loss-function value using the following (Adam-style) update:

m_{i,t} = β1·m_{i,t−1} + (1 − β1)·∂L/∂w_i
v_{i,t} = β2·v_{i,t−1} + (1 − β2)·(∂L/∂w_i)²
m̂_{i,t} = m_{i,t} / (1 − β1^t),  v̂_{i,t} = v_{i,t} / (1 − β2^t)
w_{i,t+1} = w_{i,t} − α·m̂_{i,t} / (√v̂_{i,t} + ε)

and analogously b_{i,t+1} = b_{i,t} − α·m̂/(√v̂ + ε) with the moments computed from ∂L/∂b_i. Here w_{i,t+1} and w_{i,t} are the i-th weight parameter of the network at iterations t+1 and t; α is the learning rate; b_{i,t+1} and b_{i,t} are the i-th bias parameter at iterations t+1 and t; t is the number of iterations; β1 and β2 are exponential weighting parameters, with β1^t and β2^t their t-th powers; m and v are intermediate variables (the first- and second-moment estimates); L is the loss function; w_i is the i-th weight parameter and b_i the i-th bias parameter; ε is a small quantity preventing the denominator from being zero, taken as 1 × 10⁻⁹.
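This update rule can be sketched for a single scalar parameter as follows (an illustrative sketch of the Adam optimizer; `adam_step` and its default values are assumptions, except ε = 1e-9 which matches the text):

```python
import math

def adam_step(w, m, v, grad, t, alpha=1e-4, beta1=0.9, beta2=0.999, eps=1e-9):
    """One Adam update for a single parameter w (the same form applies to
    biases). m, v are exponentially weighted first/second moment estimates;
    dividing by (1 - beta^t) is the bias correction for iteration t >= 1."""
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad * grad
    m_hat = m / (1.0 - beta1 ** t)
    v_hat = v / (1.0 - beta2 ** t)
    w = w - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v
```

On the first iteration the bias correction makes the step size approximately α regardless of the raw gradient magnitude, which is what stabilizes early training.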
The invention has the beneficial effects that:
(1) compared with the existing radar wave-absorbing coating/electromagnetic shielding film damage detection technology, the method has the advantages that the detection model is obtained by utilizing the deep learning technology, the operation of workers is facilitated, the dependence on professionals is reduced, the cost is reduced, and the efficiency is improved;
(2) compared with prior art that detects object defects with convolutional neural networks, the method can detect not only physical damage such as shedding and edge breakage of the radar wave-absorbing coating/electromagnetic shielding film but also chemical damage such as corrosion oxidation, improving detection accuracy and facilitating maintenance of the radar wave-absorbing coating/electromagnetic shielding film.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is an SAR two-dimensional image and an optical image of the radar absorbing coating/electromagnetic shielding film with physical scale damage of the present invention; in fig. 1, a is an SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film with physical scale damage, and b in fig. 1 is an optical image of the radar wave-absorbing coating/electromagnetic shielding film with physical scale damage.
FIG. 2 is an SAR two-dimensional image and an optical image of the radar absorbing coating/electromagnetic shielding film with corrosion oxidation damage of the present invention; in fig. 2, a is an SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film with corrosion oxidation damage, and b in fig. 2 is an optical image of the radar wave-absorbing coating/electromagnetic shielding film with corrosion oxidation damage.
FIG. 3 is a schematic flow chart of the intelligent imaging online detection method of radar wave-absorbing coating/electromagnetic shielding film damage based on deep learning.
FIG. 4 is a schematic diagram of a picture acquisition process of the intelligent imaging online detection method for radar wave-absorbing coating/electromagnetic shielding thin film damage based on deep learning.
FIG. 5 is a schematic diagram of the detection process of the convolutional neural network YOLO-V3 according to the present invention.
Fig. 6 is a schematic diagram of the output of the prediction image of the convolutional neural network YOLO-V3 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The intelligent imaging online detection method for radar wave-absorbing coating/electromagnetic shielding film damage based on deep learning comprises the following steps:
step S10: measuring the wave-absorbing characteristics of the radar wave-absorbing coating/electromagnetic shielding film by adopting a relative method, obtaining an SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film, and collecting an optical image corresponding to the SAR two-dimensional image, wherein the SAR two-dimensional image comprises a damaged radar wave-absorbing coating/electromagnetic shielding film and a non-damaged radar wave-absorbing coating/electromagnetic shielding film; the specific process is as follows:
step S11: a transmitting antenna and a receiving antenna are erected in a full-shielding wave-absorbing darkroom, the distance between the transmitting antenna and the receiving antenna is 145mm, the distance between the transmitting antenna and the receiving antenna is 1300mm from a target area, the polarization mode is VV polarization, the transmitting antenna and the receiving antenna are fixed on a slide rail with the length of 1002mm, and the operating frequency of a vector network is set to be 8 GHz-12 GHz.
Step S12: placing a foam bracket in a target area, scanning a transmitting-receiving antenna on a slide rail from left to right according to a sequence of 'walking-stopping-walking-stopping', wherein the total scanning range is 1002mm, the measurement interval is 6mm, and the measurement result is stored as a background level data file;
step S13: placing a calibration body on a foam bracket, scanning a transmitting-receiving antenna on a slide rail from left to right according to a sequence of 'walking-stopping-walking-stopping', wherein the total scanning range is 1002mm, the measurement interval is 6mm, and the measurement result is stored as a calibration body echo data file;
step S14: placing a radar absorbing coating/an electromagnetic shielding film on a foam support, scanning a transmitting-receiving antenna on a slide rail from left to right according to a walking-stopping-walking-stopping sequence, wherein the total scanning range is 1002mm, the measurement interval is 6mm, and the measurement result is stored as a target echo data file;
step S15: and (4) further processing the data obtained in the steps S12, S13 and S14 on an engineering computer to obtain an SAR two-dimensional image of the radar absorbing coating/electromagnetic shielding film.
Step S16: optical images of the radar absorbing coating/electromagnetic shielding film are acquired from a fixed angle and distance using an industrial camera.
In the embodiment, 3000 SAR two-dimensional images and 3000 optical images of the radar wave-absorbing coating/electromagnetic shielding film are obtained, wherein 2400 images with damage and 600 images without damage are obtained; wherein, the damage mainly includes: physical scale damage and corrosive oxidation damage; physical scale damage is shown as cracking and the like, and an optical image and an SAR two-dimensional image are shown in figure 1; the corrosion oxidation damage is shown as corrosion bubbling, and the optical image and SAR two-dimensional image are shown in FIG. 2.
Step S20: according to the SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film obtained in step S10 and the change in the corresponding optical image, judging the damage position and shape on the optical image of the damaged radar wave-absorbing coating/electromagnetic shielding film, framing the damage with a rectangular box, and marking it in the format [x_min, y_min, x_max, y_max], where (x_min, y_min) are the coordinates of the upper-left corner of the rectangular box and (x_max, y_max) the coordinates of its lower-right corner.
Step S30: normalizing the SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film obtained in step S10: calculate the mean μ and variance σ of the pixel values of the three RGB channels of the image; with g(x, y) denoting the image pixel before adjustment and f(x, y) the image pixel after adjustment, normalize the brightness and contrast using

f(x, y) = (g(x, y) − μ) / σ

which brings the image pixel value distribution close to a standard normal distribution, where (x, y) is the coordinate position of the pixel. In this embodiment the mean μ is [0.479, 0.448, 0.416] and the variance σ is [0.223, 0.226, 0.228].
The normalization processing has the advantages of reducing the training difficulty of the model, improving the generalization capability of the model and preventing the occurrence of gradient explosion.
Step S40: meanwhile, data enhancement is carried out on the SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film obtained in the step S10 and the corresponding optical image to expand data samples, and the quantity of the expanded data samples is divided into a training set and a test set according to the proportion of 8: 2; in the embodiment, the data enhancement is performed on the samples by using random rotation, random clipping, random scaling and mosaic data enhancement.
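Because the SAR image, the optical image, and the box label must stay geometrically aligned, each augmentation has to be applied to all three together. A minimal sketch of this joint transformation, using a horizontal flip as a stand-in for the rotations/crops/scalings listed above (`hflip_pair` is an illustrative name, not from the patent):

```python
import numpy as np

def hflip_pair(sar, optical, box):
    """Horizontally flip a SAR/optical image pair and its damage box together.

    box is [x_min, y_min, x_max, y_max]; the same geometric transform is
    applied to both images and to the label so they stay aligned.
    """
    w = sar.shape[1]
    new_box = [w - box[2], box[1], w - box[0], box[3]]  # mirror x-coordinates
    return sar[:, ::-1].copy(), optical[:, ::-1].copy(), new_box
```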
Step S50: inputting the training set obtained in step S40 into the convolutional neural network YOLO-V3 for training; during training, inputting the test set into the trained network, monitoring its detection precision on the test set in real time, and optimizing YOLO-V3 by adjusting its hyper-parameters (including η_max and η_min) to obtain the optimized convolutional neural network YOLO-V3 model for detecting internal and external damage of the radar wave-absorbing coating/electromagnetic shielding film;
the initial learning rate of the original convolutional neural network YOLO-V3 model was 0.0001, the learning rate was varied using the "step" method, and the following method was used in this example, and the model was defined as equation (1)
Figure BDA0002778289960000081
Wherein eta istLearning rate, η, representing the current training roundmaxAnd ηminRespectively representing the maximum and minimum values of the learning rate, defining a range of learning rates, TcurIndicating how many training rounds are currently performed and T indicates the total training round.
The training of the convolutional neural network model YOLO-V3 specifically comprises the following steps:
inputting the SAR two-dimensional images of the radar absorbing coating/electromagnetic shielding film in the training set, which mainly contain scattering characteristics, into the convolutional neural network YOLO-V3 and obtaining its prediction [x_min, y_min, x_max, y_max]_prediction; comparing it with the real mark [x_min, y_min, x_max, y_max] of the optical image corresponding to the SAR two-dimensional image; computing the loss function and then back-propagating, specifically: taking the gradient of the loss function, then correcting the model so that the loss function gradually decreases; repeating for at least 3000 rounds of training; after every 10 rounds, inputting the test set into the convolutional neural network YOLO-V3 and monitoring its detection precision on the test set in real time to prevent the model from overfitting; and obtaining the optimized convolutional neural network YOLO-V3 model for detecting internal and external damage of the radar wave-absorbing coating/electromagnetic shielding film by adjusting the hyper-parameters.
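The train-and-monitor loop described above can be skeletonized as follows. The API is hypothetical: `model.fit_batch` (one forward/loss/backprop step) and `model.evaluate` (test-set precision) are placeholder methods, not the patent's implementation:

```python
def train(model, train_set, test_set, epochs=3000, eval_every=10):
    """Run `epochs` training rounds; after every `eval_every` rounds,
    evaluate on the test set so precision can be monitored in real time."""
    history = []
    for epoch in range(1, epochs + 1):
        for sar_img, target_box in train_set:
            model.fit_batch(sar_img, target_box)   # forward, loss, backprop
        if epoch % eval_every == 0:
            history.append((epoch, model.evaluate(test_set)))
    return history
```

Tracking `history` is what lets the operator stop or re-tune hyper-parameters when test-set precision plateaus, preventing overfitting.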
The convolutional neural network YOLO-V3 is the third version of the real-time object-detection algorithm YOLO. Its backbone is the first 52 layers of the feature-extraction network DarkNet-53 (without pooling layers or fully connected layers); downsampling is performed by convolutions with stride 2, and DarkNet-53 performs 5 downsamplings. Upsampling and route operations are used in the network, and detection is performed at 3 scales in the network structure.
In the training process of the convolutional neural network YOLO-V3 model, the Leaky ReLU activation function is adopted to prevent the gradient from vanishing and to accelerate network training. This function improves accuracy without additional cost; during back propagation it passes the gradient well to the preceding network layers, thereby preventing the vanishing-gradient problem and speeding up training.
The Leaky ReLU activation function is defined as:
y=max(ax,x),a∈(0,1) (2);
where x is the weighted input sum of the neuron.
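A minimal NumPy sketch of this activation (the function name is illustrative):

```python
import numpy as np

def leaky_relu(x, a=0.01):
    """y = max(a*x, x) for a in (0, 1): positives pass unchanged,
    negatives are scaled by a so the gradient never vanishes entirely."""
    x = np.asarray(x, dtype=float)
    return np.maximum(a * x, x)
```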
The loss function is used to represent the difference between the predicted result and the true label; see definition formula (3):

L = Σ_{i=1}^{S²} Σ_{j=1}^{B} I_{ij}^{obj} [ (x̂_{ij} − x_{ij})² + (ŷ_{ij} − y_{ij})² + (ĥ_{ij} − h_{ij})² + (ŵ_{ij} − w_{ij})² ]  (3)

Wherein S² is the number of grids generated by the convolutional neural network; B is the number of candidate boxes generated in the convolutional neural network centered on the center of each grid; I_{ij}^{obj} indicates that the j-th candidate box of the i-th grid contains damage; x̂_{ij} and ŷ_{ij} represent the abscissa and ordinate of the center point of the output prediction box; x_{ij} and y_{ij} represent the abscissa and ordinate of the center point of the real box of the input label; ĥ_{ij} and ŵ_{ij} represent the length and width of the output prediction box; h_{ij} and w_{ij} represent the length and width of the real box of the input label.
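The box-regression terms of formula (3) can be sketched as follows (NumPy-based; the array shapes and the function name are assumptions, and any confidence/class terms of the full YOLO-V3 loss are omitted):

```python
import numpy as np

def box_regression_loss(pred, true, obj_mask):
    """Squared-error loss over (x, y, h, w) of predicted vs. labelled boxes.

    pred, true:  arrays of shape (S*S, B, 4) holding center x, center y,
                 length h and width w of each candidate box.
    obj_mask:    (S*S, B) array of 0/1 flags -- 1 where the j-th candidate
                 box of grid cell i contains damage.
    """
    sq_err = np.sum((pred - true) ** 2, axis=-1)   # sum over the 4 box terms
    return float(np.sum(obj_mask * sq_err))        # only damage-containing boxes count
```

Boxes whose mask entry is 0 contribute nothing, matching the role of the indicator I_{ij}^{obj}.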
The loss function is parameterized by w and b. The purpose of the network training is to find the values of w and b that minimize the loss function L; in the invention, w and b are updated as shown in formula (4):

m_{i,t} = β1·m_{i,t−1} + (1 − β1)·∂L/∂w_{i,t}
v_{i,t} = β2·v_{i,t−1} + (1 − β2)·(∂L/∂w_{i,t})²
w_{i,t+1} = w_{i,t} − α·(m_{i,t}/(1 − β1^t)) / (√(v_{i,t}/(1 − β2^t)) + ε)  (4)

with the bias parameters b_{i,t+1} updated from b_{i,t} in the same way.

Wherein w_{i,t+1} represents the i-th weight parameter of the network in the (t+1)-th iteration; w_{i,t} represents the i-th weight parameter of the network in the t-th iteration; α is the learning rate; b_{i,t+1} represents the i-th bias parameter of the network in the (t+1)-th iteration; b_{i,t} represents the i-th bias parameter of the network in the t-th iteration; t is the number of iterations; β1 and β2 are exponentially weighted parameters, and the embodiment of the invention takes β1 = 0.9, β2 = 0.999; β1^t and β2^t denote β1 and β2 raised to the power t; m_{i,t} and v_{i,t} are intermediate variables; w_{i,t−1} represents the i-th weight parameter of the network in the (t−1)-th iteration; L represents the loss function; w_i represents the i-th weight parameter; b_i represents the i-th bias parameter; ε is a small quantity to prevent the denominator from being zero, taken as 1×10⁻⁹.
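This exponentially weighted update can be sketched for a single parameter as follows (the names m and v for the intermediate variables are illustrative labels):

```python
import math

def adam_step(w, grad, m, v, t, alpha=1e-4, beta1=0.9, beta2=0.999, eps=1e-9):
    """One update of parameter w at iteration t (t >= 1).

    m and v are exponentially weighted running averages of the gradient and
    its square; dividing by (1 - beta**t) corrects their startup bias; eps
    keeps the denominator away from zero."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
    w = w - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v
```

On the first iteration the bias correction makes m_hat equal the raw gradient, so the step size is approximately α regardless of the gradient's magnitude.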
Step S60: the SAR two-dimensional image and the optical image of the radar wave-absorbing coating/electromagnetic shielding film to be detected are acquired in the manner of step S10 and processed as in step S30, then input into the optimized convolutional neural network YOLO-V3 model for detecting internal and external damage of the radar wave-absorbing coating/electromagnetic shielding film obtained in step S50, which detects whether the radar wave-absorbing coating/electromagnetic shielding film contains damage; if it does, the position coordinates of the damage are obtained and the damage is marked on the optical image of the radar wave-absorbing coating/electromagnetic shielding film, so that the damage is visualized on the optical image.
The invention divides the data set into a training set and a test set at a ratio of 8:2, both carrying real labels (i.e. labels marked by manual judgment). The training set is used to train the model, and the test set is used to check the detection performance of the model. For damage judged from the output, the evaluation criterion is determined from the SAR two-dimensional image: when the scattering characteristic of a certain area changes significantly (i.e. the reflectivity is larger), it is considered damage; when the scattering characteristic does not change significantly (i.e. the reflectivity is smaller), it is not judged as damage. The IOU method is adopted to measure how accurately the predicted damage boundary matches the real damage, i.e. the intersection of the two (their overlapping part) divided by their union; the closer the IOU value is to 1, the more accurate the prediction.
The type of damage is judged from differences in scattering characteristics: during training the model learns that the SAR two-dimensional images corresponding to different damage types have different scattering characteristics, extracts these features, and then makes the judgment. The prediction result is output on the optical image, as shown in fig. 6.
The scattering characteristics of the damage and the optical image have a certain correspondence, but this is an empirical relation: at a damaged position the reflectivity is larger and the scattering characteristic is obvious, and differences in the type, shape, depth and position of the damage change the shape, area and position of the damaged area shown in the SAR two-dimensional image. Judging damage from the SAR two-dimensional image, however, traditionally depends on professionals with mature experience, which places high demands on their knowledge and experience, is inefficient, and is prone to wrong and missed judgments. By training the deep learning model, the model acquires the knowledge and experience of these professionals and the ability to detect and identify damage, greatly improving detection efficiency and accuracy.
It is noted that, in the present application, relational terms such as first, second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. The intelligent imaging online detection method for radar wave-absorbing coating/electromagnetic shielding film damage based on deep learning is characterized by comprising the following steps of:
s10, measuring the wave-absorbing characteristics of the radar wave-absorbing coating/electromagnetic shielding film by adopting a relative method, respectively obtaining SAR two-dimensional images of the damaged and undamaged radar wave-absorbing coating/electromagnetic shielding film, and collecting corresponding optical images;
S20, judging the damage position and shape on the optical image of the damaged radar wave-absorbing coating/electromagnetic shielding film according to the SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film obtained in step S10 and the change condition of the corresponding optical image, framing the damage with a rectangular box, and marking the damage in the format [xmin, ymin, xmax, ymax], wherein (xmin, ymin) are the coordinates of the upper-left corner of the rectangular box and (xmax, ymax) are the coordinates of the lower-right corner of the rectangular box;
s30, carrying out normalization processing on the SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film obtained in the S10;
S40, simultaneously performing data enhancement on the SAR two-dimensional images of the radar wave-absorbing coating/electromagnetic shielding film and the corresponding optical images obtained in step S10 to expand the data samples, and dividing the expanded data samples into a training set and a test set at a ratio of 8:2;
s50, inputting the training set obtained in the step S40 into a convolutional neural network YOLO-V3 for training, inputting a test set into the trained convolutional neural network YOLO-V3 in the training process, monitoring the detection precision of the test set on the convolutional neural network YOLO-V3 in real time, and optimizing the convolutional neural network YOLO-V3 by adjusting the hyper-parameter of the convolutional neural network YOLO-V3 to obtain an optimized convolutional neural network YOLO-V3 model for detecting the internal and external damages of the radar wave-absorbing coating/electromagnetic shielding film;
and S60, acquiring an SAR two-dimensional image and an optical image of the radar wave-absorbing coating/electromagnetic shielding film to be detected in the manner of step S10, processing them as in step S30, inputting them into the optimized convolutional neural network YOLO-V3 model for detecting internal and external damage of the radar wave-absorbing coating/electromagnetic shielding film obtained in step S50, and detecting whether the radar wave-absorbing coating/electromagnetic shielding film contains damage; if it contains damage, obtaining the position coordinates of the damage and marking the damage on the optical image of the radar wave-absorbing coating/electromagnetic shielding film.
2. The intelligent imaging online detection method for the damage of the radar wave-absorbing coating/electromagnetic shielding film based on deep learning of claim 1, wherein the step S10 specifically comprises the following steps:
step S11: erecting a transmitting antenna and a receiving antenna in a fully shielded anechoic chamber and setting the distance between the transmitting and receiving antennas and the target area, wherein the polarization mode is VV polarization, the transmitting and receiving antennas are fixed on a slide rail, and the working frequency of the vector network analyzer is set to 8 GHz-12 GHz;
step S12: placing a foam bracket in a target area, setting a total scanning range and a measurement interval, scanning a transmitting-receiving antenna on a slide rail from left to right according to a sequence of walking-stopping-walking-stopping, and storing a measurement result as a background level data file;
step S13: placing a calibration body on the foam bracket, scanning the transceiving antenna from left to right on the slide rail in the walking-stopping-walking-stopping sequence according to the total scanning range and measurement interval set in step S12, and storing the measurement result as a calibration body echo data file;
step S14: placing a radar absorbing coating/electromagnetic shielding film on the foam support, scanning the transceiving antenna on the slide rail from left to right according to the walking-stopping-walking-stopping sequence according to the total scanning range and the measurement interval set in the step S12, and storing the measurement result as a target echo data file;
step S15: processing the test result data obtained in the steps S12, S13 and S14 on a computer to obtain an SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film;
step S16: optical images of the radar absorbing coating/electromagnetic shielding film are acquired from a fixed angle and distance using an industrial camera.
3. The intelligent imaging online detection method for radar absorbing coating/electromagnetic shielding thin film damage based on deep learning of claim 1, wherein the step S30 specifically comprises the following steps: respectively calculating the mean value mu and the variance sigma of the pixel values of the RGB three channels of the SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film obtained in the step S10; then, the brightness and contrast of the SAR two-dimensional image of the radar wave-absorbing coating/electromagnetic shielding film obtained in the step S10 are normalized and adjusted by adopting the following formula:
f(x, y) = (g(x, y) − μ) / σ
in the formula, f (x, y) represents the image pixel after adjustment, g (x, y) represents the image pixel before adjustment, and (x, y) represents the coordinate position of the pixel point.
4. The intelligent imaging online detection method for radar wave-absorbing coating/electromagnetic shielding film damage based on deep learning according to claim 1, wherein in step S40 the data enhancement comprises any one or more of: random rotation, random cropping, random scaling, and mosaic augmentation.
5. The intelligent imaging online detection method for radar absorbing coating/electromagnetic shielding thin film damage based on deep learning of claim 1, wherein the step S50 specifically comprises the following steps:
inputting the SAR two-dimensional images of the radar wave-absorbing coating/electromagnetic shielding film of the training set obtained in step S40 into the convolutional neural network YOLO-V3, and outputting the prediction result of the convolutional neural network YOLO-V3, namely [xmin, ymin, xmax, ymax]prediction; comparing it with the real label [xmin, ymin, xmax, ymax] of the optical image corresponding to the SAR two-dimensional image, calculating the loss function, calculating the gradient of the loss function, then correcting the model to reduce the loss function value, performing at least 3000 training iterations, inputting the test set into the trained convolutional neural network YOLO-V3 after every 10 rounds of training, monitoring the detection accuracy of the network on the test set in real time, and obtaining the optimized convolutional neural network YOLO-V3 model for detecting internal and external damage of the radar wave-absorbing coating/electromagnetic shielding film by adjusting the hyperparameters.
6. The intelligent imaging online detection method for radar wave-absorbing coating/electromagnetic shielding film damage based on deep learning according to claim 5, wherein in step S50 the initial learning rate of the convolutional neural network YOLO-V3 is 0.0001, and the learning rate is varied as shown in the following formula:
ηt = ηmin + (1/2)(ηmax − ηmin)(1 + cos(Tcur·π / T))

in the formula, ηt represents the learning rate of the current training round, ηmax represents the maximum value of the learning rate, ηmin represents the minimum value of the learning rate, Tcur represents the number of training rounds currently performed, and T represents the total number of training rounds.
7. The intelligent imaging online detection method for radar wave-absorbing coating/electromagnetic shielding thin film damage based on deep learning of claim 5, wherein the loss function L is shown as follows:
L = Σ_{i=1}^{S²} Σ_{j=1}^{B} I_{ij}^{obj} [ (x̂_{ij} − x_{ij})² + (ŷ_{ij} − y_{ij})² + (ĥ_{ij} − h_{ij})² + (ŵ_{ij} − w_{ij})² ]

in the formula, S² is the number of grids generated by the convolutional neural network; B is the number of candidate boxes generated in the convolutional neural network centered on the center of each grid; I_{ij}^{obj} indicates that the j-th candidate box of the i-th grid contains damage; x̂_{ij} and ŷ_{ij} represent the abscissa and ordinate of the center point of the output prediction box; x_{ij} and y_{ij} represent the abscissa and ordinate of the center point of the real box of the input label; ĥ_{ij} and ŵ_{ij} represent the length and width of the output prediction box; h_{ij} and w_{ij} represent the length and width of the real box of the input label.
8. The intelligent imaging online detection method for radar absorbing coating/electromagnetic shielding thin film damage based on deep learning of claim 5, wherein the method for modifying the model and reducing the loss function value adopts the following formula:
m_{i,t} = β1·m_{i,t−1} + (1 − β1)·∂L/∂w_{i,t}
v_{i,t} = β2·v_{i,t−1} + (1 − β2)·(∂L/∂w_{i,t})²
w_{i,t+1} = w_{i,t} − α·(m_{i,t}/(1 − β1^t)) / (√(v_{i,t}/(1 − β2^t)) + ε)

with the bias parameters b_{i,t+1} updated from b_{i,t} in the same way;

in the formula, w_{i,t+1} represents the i-th weight parameter of the network in the (t+1)-th iteration; w_{i,t} represents the i-th weight parameter of the network in the t-th iteration; α is the learning rate; b_{i,t+1} represents the i-th bias parameter of the network in the (t+1)-th iteration; b_{i,t} represents the i-th bias parameter of the network in the t-th iteration; t is the number of iterations; β1 and β2 are exponentially weighted parameters; β1^t and β2^t denote β1 and β2 raised to the power t; m_{i,t} and v_{i,t} are intermediate variables; w_{i,t−1} represents the i-th weight parameter of the network in the (t−1)-th iteration; L represents the loss function; w_i represents the i-th weight parameter; b_i represents the i-th bias parameter; ε is a small quantity to prevent the denominator from being zero, taken as 1×10⁻⁹.
CN202011273131.9A 2020-11-13 2020-11-13 Intelligent imaging on-line detection method for radar wave-absorbing coating/electromagnetic shielding film damage based on deep learning Active CN112381792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011273131.9A CN112381792B (en) 2020-11-13 2020-11-13 Intelligent imaging on-line detection method for radar wave-absorbing coating/electromagnetic shielding film damage based on deep learning


Publications (2)

Publication Number Publication Date
CN112381792A true CN112381792A (en) 2021-02-19
CN112381792B CN112381792B (en) 2023-05-23

Family

ID=74583975


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202421499U (en) * 2011-12-30 2012-09-05 北京华航无线电测量研究所 Millimeter wave imaging device for omni-scanning of single antenna array
CN103926274A (en) * 2014-04-22 2014-07-16 哈尔滨工业大学 Infrared thermal wave radar imaging nondestructive testing method and system for defects of carbon fiber reinforced plastic (CFRP) plywood
CN104614726A (en) * 2015-03-05 2015-05-13 北京航空航天大学 Telescopic array type portable MIMO-SAR (multiple-input multiple-output synthetic aperture radar) measurement radar system and imaging method thereof
CN107563411A (en) * 2017-08-07 2018-01-09 西安电子科技大学 Online SAR target detection method based on deep learning
CN109427049A (en) * 2017-08-22 2019-03-05 成都飞机工业(集团)有限责任公司 A kind of detection method of holiday
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN110164473A (en) * 2019-05-21 2019-08-23 江苏师范大学 A kind of chord arrangement detection method based on deep learning
CN110363820A (en) * 2019-06-28 2019-10-22 东南大学 It is a kind of based on the object detection method merged before laser radar, image
CN110473231A (en) * 2019-08-20 2019-11-19 南京航空航天大学 A kind of method for tracking target of the twin full convolutional network with anticipation formula study more new strategy
CN110910382A (en) * 2019-11-29 2020-03-24 添维信息科技(天津)有限公司 Container detection system
CN110992337A (en) * 2019-11-29 2020-04-10 添维信息科技(天津)有限公司 Container damage detection method and system
CN111223162A (en) * 2020-01-06 2020-06-02 华北电力大学(保定) Deep learning method and system for reconstructing EPAT image
CN111401225A (en) * 2020-03-13 2020-07-10 河海大学常州校区 Crowd abnormal behavior detection method based on improved logistic regression classification
CN111721834A (en) * 2020-06-22 2020-09-29 南京南瑞继保电气有限公司 Cable partial discharge online monitoring defect identification method


Non-Patent Citations (3)

Title
JÜRGEN KUNISCH et al.: "Measurement results and modeling aspects for the UWB radio channel", 2002 IEEE Conference on Ultra Wideband Systems and Technologies *
Liu Yutao: "Fundamental research on plasma imaging systems for holography-oriented parameter diagnosis", China Master's Theses Full-text Database, Basic Sciences *
Zhou Fangrong; Fang Ming; Ma Yutang; Pan Hao: "Fast detection method for transmission line defects based on YOLO v3", Yunnan Electric Power Technology *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant