CN116883763B - Deep learning-based automobile part defect detection method and system - Google Patents

Deep learning-based automobile part defect detection method and system

Info

Publication number
CN116883763B
CN116883763B (application CN202311142490.4A)
Authority
CN
China
Prior art keywords
automobile part
automobile
data
defect detection
defect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311142490.4A
Other languages
Chinese (zh)
Other versions
CN116883763A (en)
Inventor
吴治国
赵如意
彭志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningde Tianming New Energy Auto Parts Co ltd
Original Assignee
Ningde Tianming New Energy Auto Parts Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningde Tianming New Energy Auto Parts Co ltd filed Critical Ningde Tianming New Energy Auto Parts Co ltd
Priority to CN202311142490.4A
Publication of CN116883763A
Application granted
Publication of CN116883763B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an automobile part defect detection method and system based on deep learning, and relates to the technical field of automobile part defect detection. A convolutional neural network based on a region candidate network is adopted to identify the type of automobile parts, which improves the automation degree and the overall accuracy of the system; a method combining a gating unit with a recurrent neural network is adopted to locate the position of the automobile parts, which improves the overall usability of the system; and a deep neural network combined with an adversarial-training-optimized model is adopted to classify automobile part defects into three categories, which improves the detection classification effectiveness and the detection depth of the whole system.

Description

Deep learning-based automobile part defect detection method and system
Technical Field
The invention relates to the technical field of automobile part defect detection, in particular to an automobile part defect detection method and system based on deep learning.
Background
Automobile part defect detection based on deep learning is a method for automatically detecting whether automobile parts have defects by using deep learning algorithms and techniques. By applying deep learning, efficient, accurate and automated defect detection of automobile parts can be realized, thereby improving production quality and safety.
However, the existing automobile part defect detection methods have the following technical problems: parts are found and classified from real image data manually and according to experience, which is inefficient and, owing to human factors such as the pressure and eyesight of the technicians involved, prone to false alarms and missed detections; automatic detection is often affected by the complexity of the automobile structure, weather conditions and illumination changes, so the positioning accuracy of automobile parts is low, which negatively affects subsequent defect detection and classification; and the defect types that automatic detection can cover are few, so the detection requirements for automobile part defects cannot be met effectively.
Disclosure of Invention
Aiming at the problem that, in the prior art, automobile part defects are found and classified manually from real image data according to experience, which is inefficient and prone to false alarms and missed detections caused by human factors such as the pressure and eyesight of the technicians involved, this scheme creatively adopts a convolutional neural network method based on a region candidate network to automatically identify the type of automobile parts, improving the automation degree and the overall accuracy of the system. Aiming at the technical problem that automatic detection in existing automobile part defect detection methods is often affected by the complexity of the automobile structure, weather conditions and illumination changes, so that the positioning accuracy of automobile parts is low and subsequent defect detection and classification are negatively affected, this scheme creatively adopts a method combining a gating unit with a recurrent neural network to fuse context information features of the automobile part defect detection data and locate the position of the automobile parts, detecting position defects more intuitively and improving the overall usability of the system. Aiming at the technical problem that existing methods can automatically detect only a few defect types and therefore cannot effectively meet the detection requirements for automobile part defects, this scheme creatively adopts a deep neural network combined with an adversarial-training-optimized model to classify automobile part defects into three categories, namely part misalignment, part missing and part damage, meeting the detection requirements of automobile part defect detection and further improving the detection classification effectiveness and detection depth of the whole system.
The technical scheme adopted by the invention is as follows: the invention provides a method for detecting defects of automobile parts based on deep learning, which comprises the following steps:
step S1: obtaining data;
step S2: preprocessing data;
step S3: identifying the type of the automobile parts;
step S4: positioning the positions of automobile parts;
step S5: classifying defects of automobile parts;
step S6: and detecting defects of automobile parts.
Further, in step S1, the data acquisition specifically refers to acquiring images of key parts of the automobile from multi-view images of the automobile, so as to obtain an automobile part defect detection original data set I_O;
Further, in step S2, the data preprocessing specifically refers to performing denoising, oversampling, data splicing and normalization operations on the data in the automobile part defect detection original data set I_O to obtain an automobile part defect detection enhanced data set I_S;
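As an illustrative sketch only (the patent does not fix concrete operators), the preprocessing of step S2 could be organised as follows; the Gaussian denoising kernel, the target resolution and the simple horizontal-flip oversampling used here are assumptions, not part of the original disclosure:

```python
import cv2
import numpy as np

def preprocess_dataset(raw_images, target_size=(512, 512)):
    """Denoise, oversample, splice and normalize raw part images (step S2 sketch)."""
    enhanced = []
    for img in raw_images:
        # Denoising: a Gaussian blur stands in for the unspecified denoising operator.
        denoised = cv2.GaussianBlur(img, (3, 3), 0)
        # Data splicing / size unification: resize every view to a common resolution.
        resized = cv2.resize(denoised, target_size)
        # Normalization: scale pixel values to [0, 1].
        normalized = resized.astype(np.float32) / 255.0
        enhanced.append(normalized)
        # Oversampling: augment with a horizontal flip to enlarge the data set.
        enhanced.append(np.fliplr(normalized))
    return np.stack(enhanced)  # enhanced data set I_S
```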
Further, in step S3, the automobile part type identification specifically refers to performing automobile part type identification on the enhanced image data in the automobile part defect detection enhanced data set I_S by a convolutional neural network method based on a region candidate network, comprising the following steps:
Step S31: constructing a basic convolution network layer, specifically adopting a pre-trained ResNet network as the basic network, taking the enhanced image data in the automobile part defect detection enhanced data set I_S as the part image input data I_Input to be identified, and extracting features of the part image input data I_Input to be identified through the basic network to obtain the feature image I_F of the part to be identified;
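A minimal sketch of step S31 using PyTorch is given below; the choice of ResNet-50, the ImageNet weights and the truncation of the network before its pooling and classification head are assumptions, since the patent only states that a pre-trained ResNet serves as the basic network:

```python
import torch
import torchvision.models as models

# Pre-trained ResNet as the basic network; ResNet-50 with ImageNet weights is an assumed choice
# (the weights= keyword requires torchvision >= 0.13).
backbone = models.resnet50(weights="IMAGENET1K_V1")
# Keep only the convolutional stages so the output is a feature map, not class scores.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])
feature_extractor.eval()

def extract_features(i_input):
    """Map part image input data I_Input (N, 3, H, W) to the feature image I_F."""
    with torch.no_grad():
        return feature_extractor(i_input)  # (N, 2048, H/32, W/32) for ResNet-50
```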
Step S32: constructing a region candidate network layer, specifically connecting the region candidate network after the basic network and obtaining candidate regions through an anchor box generation operation, wherein the anchor box generation operation comprises the following steps:
Step S321: window sliding operation, specifically sliding a window over the feature image I_F of the part to be identified, wherein the calculation formula of the window center coordinates is:
x_i′ = s · (i′ mod w), y_i′ = s · (i′ // w);
wherein x_i′ is the window center horizontal coordinate, y_i′ is the window center vertical coordinate, s is the window sliding stride parameter, i′ is the window index, mod is the modulo operator, w is the window width, and // is the rounding division operator;
Step S322: calculating the anchor box width w_k, wherein the calculation formula is:
w_k = s_l · sqrt(r_k);
wherein w_k is the anchor box width, r_k is the anchor box aspect ratio, and s_l is a scale parameter;
Step S323: calculating the anchor box height h_k, wherein the calculation formula is:
h_k = s_l / sqrt(r_k);
wherein h_k is the anchor box height, r_k is the anchor box aspect ratio, and s_l is a scale parameter;
Step S324: calculating the anchor box coordinates Anchor Box, wherein the calculation formula is:
Anchor Box = (x_i′ - w_k/2, y_i′ - h_k/2, x_i′ + w_k/2, y_i′ + h_k/2);
wherein Anchor Box is the anchor box coordinates, x_i′ is the window center horizontal coordinate, y_i′ is the window center vertical coordinate, w_k is the anchor box width, and h_k is the anchor box height;
Step S325: generating anchor boxes, specifically generating anchor boxes from the calculated anchor box width w_k, anchor box height h_k and anchor box coordinates Anchor Box, and obtaining candidate regions from the generated anchor boxes;
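The anchor box generation of steps S321 to S325 can be sketched in a few lines of Python; the stride, scale and aspect-ratio values below are illustrative assumptions, not values disclosed in the patent:

```python
import numpy as np

def generate_anchor_boxes(num_windows, w=64, s=16, scales=(64, 128), ratios=(0.5, 1.0, 2.0)):
    """Steps S321-S325: slide a window and emit (x1, y1, x2, y2) anchor boxes."""
    boxes = []
    for i in range(num_windows):
        x_c = s * (i % w)        # window center horizontal coordinate
        y_c = s * (i // w)       # window center vertical coordinate
        for s_l in scales:
            for r_k in ratios:
                w_k = s_l * np.sqrt(r_k)   # anchor box width
                h_k = s_l / np.sqrt(r_k)   # anchor box height
                boxes.append((x_c - w_k / 2, y_c - h_k / 2,
                              x_c + w_k / 2, y_c + h_k / 2))
    return np.array(boxes)  # candidate regions for the region candidate network
```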
Step S33: constructing a region classification network, specifically constructing a region of interest pool, and normalizing the size of the candidate regions through the region of interest pool to obtain a fixed specification feature map I_ROI, wherein constructing the region of interest pool comprises the following steps:
Step S331: calculating the pooled region size of the region of interest pool, wherein the calculation formula is:
W_OUT = round(W_IN / p), H_OUT = round(H_IN / p);
wherein W_IN is the candidate region width, H_IN is the candidate region height, W_OUT is the pooled region width, H_OUT is the pooled region height, round() is the rounding operator, and p is the pooling parameter;
Step S332: calculating the pooled region position of the region of interest pool, wherein the calculation formula is:
x_OUT = round(x_IN / p), y_OUT = round(y_IN / p);
wherein x_IN is the candidate region horizontal coordinate, y_IN is the candidate region vertical coordinate, x_OUT is the pooled region horizontal coordinate, y_OUT is the pooled region vertical coordinate, round() is the rounding operator, and p is the pooling parameter;
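A small sketch of the coordinate mapping used by the region of interest pool in steps S331 and S332 follows; the pooling parameter p = 16 is an assumed feature-map stride, not a value stated in the patent:

```python
def roi_pool_mapping(x_in, y_in, w_in, h_in, p=16):
    """Map a candidate region from image coordinates onto the pooled feature map."""
    x_out = round(x_in / p)          # pooled region horizontal coordinate
    y_out = round(y_in / p)          # pooled region vertical coordinate
    w_out = max(1, round(w_in / p))  # pooled region width (at least one cell)
    h_out = max(1, round(h_in / p))  # pooled region height
    return x_out, y_out, w_out, h_out

# Example: a 224 x 160 candidate region at (320, 96) maps to a 14 x 10 cell window.
print(roi_pool_mapping(320, 96, 224, 160))  # -> (20, 6, 14, 10)
```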
Step S34: constructing a full connection layer, specifically performing category prediction on the fixed specification feature map I_ROI using the full connection layer to obtain initial part identification categories;
Step S35: part identification screening, specifically screening the initial part identification categories through a non-maximum suppression algorithm, wherein the calculation formula of the non-maximum suppression algorithm is:
IoU = Area(Box_1 ∩ Box_2) / Area(Box_1 ∪ Box_2);
wherein IoU is the candidate region overlap calculated by the non-maximum suppression algorithm, Area(Box_1 ∩ Box_2) is the intersection area of the 1st candidate region and the 2nd candidate region selected by the non-maximum suppression algorithm, and Area(Box_1 ∪ Box_2) is the total union area of the 1st candidate region and the 2nd candidate region selected by the non-maximum suppression algorithm;
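An illustrative implementation of the IoU computation and the non-maximum suppression screening of step S35 is sketched below; the 0.5 overlap threshold is an assumption, since the patent does not specify one:

```python
import numpy as np

def iou(box1, box2):
    """Overlap IoU = Area(Box1 ∩ Box2) / Area(Box1 ∪ Box2) for (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    x2, y2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

def non_max_suppression(boxes, scores, threshold=0.5):
    """Keep the highest-scoring boxes and drop any box overlapping a kept one too much."""
    order = np.argsort(scores)[::-1]
    keep = []
    for idx in order:
        if all(iou(boxes[idx], boxes[k]) < threshold for k in keep):
            keep.append(idx)
    return keep
```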
step S36: training the automobile part type recognition model, specifically through the construction basic convolution network layer, the construction area candidate network layer and the construction area classification network And performing Model training on the automobile part type recognition Model by constructing the full connection layer and performing part recognition screening to obtain an automobile part type recognition Model API
Step S37: identification of the type of a vehicle component, in particular using said Model for identifying the type of a vehicle component API Enhancement data set I for defect detection of automobile parts S The enhanced image data in the model are subjected to automobile part type identification and classification to obtain automobile part type identification data D IC
Further, in step S4, the positioning of the position of the automobile part, specifically, the method of combining the gating unit with the recurrent neural network is adopted to detect the defect of the automobile part and enhance the data set I S The method comprises the following steps of:
Step S41: constructing the update gate, wherein the calculation formula is:
Update_t = σ(θ_U·[m_t, h_(t-1)] + b_U);
wherein Update_t is the positioning information output of the update gate at time t, σ() is the sigmoid function, θ_U is the update gate learning weight, m_t is the positioning context information of the enhanced image data in the automobile part defect detection enhanced data set I_S at time t, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and b_U is the update gate bias parameter;
Step S42: constructing the reset gate, wherein the calculation formula is:
Reset_t = σ(θ_R·[m_t, h_(t-1)] + b_R);
wherein Reset_t is the positioning information output of the reset gate at time t, σ() is the sigmoid function, θ_R is the reset gate learning weight, m_t is the positioning context information of the enhanced image data in the automobile part defect detection enhanced data set I_S at time t, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and b_R is the reset gate bias parameter;
Step S43: calculating the gating unit memory state, wherein the calculation formula is:
Cell_t = tanh(θ_C·[m_t, (Reset_t ⊙ h_(t-1))] + b_C);
wherein Cell_t is the gating unit memory state at time t, Reset_t is the positioning information output of the reset gate at time t, tanh() is the hyperbolic tangent function, θ_C is the memory state weight, m_t is the positioning context information of the enhanced image data in the automobile part defect detection enhanced data set I_S at time t, ⊙ is the element-wise product, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and b_C is the memory state bias parameter;
Step S44: updating the gating unit enhanced image data feature hidden state, wherein the calculation formula is:
h_t = Update_t ⊙ h_(t-1) + (1 - Update_t) ⊙ Cell_t;
wherein h_t is the enhanced image data feature hidden state at time t, Update_t is the positioning information output of the update gate at time t, ⊙ is the element-wise product, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and Cell_t is the gating unit memory state at time t;
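A compact NumPy sketch of the gating unit defined by steps S41 to S44 is given below; the feature dimensions and random weight initialisation are assumptions used only to make the example self-contained:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GatingUnit:
    """Update gate, reset gate, memory state and hidden-state update (steps S41-S44)."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        concat = input_dim + hidden_dim
        self.theta_U = rng.normal(0, 0.1, (hidden_dim, concat))  # update gate learning weight
        self.theta_R = rng.normal(0, 0.1, (hidden_dim, concat))  # reset gate learning weight
        self.theta_C = rng.normal(0, 0.1, (hidden_dim, concat))  # memory state weight
        self.b_U = np.zeros(hidden_dim)
        self.b_R = np.zeros(hidden_dim)
        self.b_C = np.zeros(hidden_dim)

    def step(self, m_t, h_prev):
        """One time step: m_t is the positioning context feature, h_prev is h_(t-1)."""
        update_t = sigmoid(self.theta_U @ np.concatenate([m_t, h_prev]) + self.b_U)
        reset_t = sigmoid(self.theta_R @ np.concatenate([m_t, h_prev]) + self.b_R)
        cell_t = np.tanh(self.theta_C @ np.concatenate([m_t, reset_t * h_prev]) + self.b_C)
        # h_t = Update_t ⊙ h_(t-1) + (1 - Update_t) ⊙ Cell_t
        return update_t * h_prev + (1.0 - update_t) * cell_t
```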
Step S45: training the automobile part position positioning model, specifically performing model training on the automobile part position positioning model through constructing the update gate, constructing the reset gate, calculating the gating unit memory state and updating the gating unit enhanced image data feature hidden state, so as to obtain the automobile part position positioning model Model_APP;
Step S46: automobile part position positioning, specifically performing automobile part position determination on the enhanced image data in the automobile part defect detection enhanced data set I_S by using the automobile part position positioning model Model_APP, so as to obtain the automobile part position positioning data D_PD.
Further, in step S5, the automobile part defect classification specifically refers to classifying automobile part defects in the enhanced image data of the automobile part defect detection enhanced data set I_S by a method combining a deep neural network with an adversarial-training-optimized model, comprising the following steps:
Step S51: constructing a discriminator network, specifically training a discriminator D through a pre-trained DenseNet network, randomly selecting 70% of the enhanced image data in the automobile part defect detection enhanced data set I_S as training images, and randomly selecting 30% of the enhanced image data as actual images I;
Step S52: constructing a generator network, specifically adopting a plurality of loss functions to optimize model training, comprising the following steps:
Step S521: using a mean square error loss function L_1 to optimize the error between the generated image and the actual image, wherein the calculation formula is:
L_1 = (1 / (M·N)) · Σ_(i=1..M) Σ_(j=1..N) (I_ij - I′_ij)²;
wherein L_1 is the mean square error loss function, M is the number of pixel rows of the generated image, N is the number of pixel columns of the generated image, i is the pixel row index, j is the pixel column index, I_ij is an actual image pixel, and I′_ij is a generated image pixel;
Step S522: using a similarity error loss function L_2 to optimize the quality of the generated image, wherein the calculation formula is:
L_2 = β · H(I, I′);
wherein L_2 is the similarity error loss function, β is the similarity error weight with a value range of [-1, 1], H(I, I′) is the similarity calculation function, I is the actual image, and I′ is the generated image;
Step S523: using a prediction probability loss function L_3 to regulate the overall prediction efficiency of the model, wherein L_3 is calculated from the similarity calculation function H(I, I′) between the actual image I and the generated image I′;
Step S524: constructing a minimized target loss function L, wherein the calculation formula is:
L = min F(L_1, L_2, L_3) = ω_1·L_1 + ω_2·L_2 + ω_3·L_3;
wherein L is the minimized target loss function, min is the minimum operation, F() is the target function with F(L_1, L_2, L_3) = ω_1·L_1 + ω_2·L_2 + ω_3·L_3, ω_1 is the mean square error weight factor, L_1 is the mean square error loss function, ω_2 is the similarity error weight factor, L_2 is the similarity error loss function, ω_3 is the prediction probability weight factor, and L_3 is the prediction probability loss function;
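The weighted objective of steps S521 to S524 can be sketched as follows; the similarity function (a normalised-correlation stand-in), the form of the prediction probability term and the weight values are illustrative assumptions, because the patent leaves H(I, I′), L_3 and ω_1..ω_3 unspecified:

```python
import numpy as np

def mse_loss(actual, generated):
    """L_1: mean square error over an M x N image."""
    return np.mean((actual - generated) ** 2)

def similarity(actual, generated):
    """Assumed stand-in for H(I, I'): normalised correlation clipped to [0, 1]."""
    a = actual - actual.mean()
    g = generated - generated.mean()
    denom = np.sqrt((a ** 2).sum() * (g ** 2).sum()) + 1e-8
    return float(np.clip((a * g).sum() / denom, 0.0, 1.0))

def total_loss(actual, generated, beta=0.5, w=(1.0, 0.5, 0.1)):
    """L = w1*L1 + w2*L2 + w3*L3, with an assumed negative-log form for L_3."""
    l1 = mse_loss(actual, generated)
    l2 = beta * similarity(actual, generated)
    l3 = -np.log(similarity(actual, generated) + 1e-8)  # assumed prediction probability term
    return w[0] * l1 + w[1] * l2 + w[2] * l3
```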
step S53: training a generator and a discriminator, specifically adopting the minimized target loss function L to optimize model training, and obtaining a generator G and a discriminator D through training;
Step S54: training the automobile part defect classification model, specifically performing model training on the automobile part defect classification model through constructing the discriminator network, constructing the generator network, and training the generator and the discriminator on the basis of the training images, so as to obtain the automobile part defect classification model Model_FC;
Step S55: automobile part defect classification, specifically classifying automobile part defects by using the automobile part defect classification model Model_FC to obtain the defect category F_C, wherein the defect categories include part misalignment, part missing and part damage.
Further, in step S6, the automobile part defect detection specifically refers to obtaining the automobile part type identification data D_IC by using the automobile part type recognition model Model_API, obtaining the automobile part position positioning data D_PD by using the automobile part position positioning model Model_APP, obtaining the defect category F_C by using the automobile part defect classification model Model_FC, and combining the automobile part type identification data D_IC, the automobile part position positioning data D_PD and the defect category F_C to obtain the automobile part defect detection data D_F.
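As a simple illustration of step S6, the three model outputs can be merged into one detection record per image; the dictionary layout and field names below are assumptions introduced only for this sketch:

```python
def combine_detection_outputs(d_ic, d_pd, f_c):
    """Merge type identification D_IC, position positioning D_PD and defect category F_C
    into the automobile part defect detection data D_F, keyed by image identifier."""
    d_f = {}
    for image_id in d_ic:
        d_f[image_id] = {
            "part_type": d_ic[image_id],       # from Model_API
            "part_position": d_pd[image_id],   # from Model_APP
            "defect_category": f_c[image_id],  # from Model_FC
        }
    return d_f

# Example usage with one image.
print(combine_detection_outputs({"img_001": "headlight"},
                                {"img_001": (120, 80, 260, 190)},
                                {"img_001": "part missing"}))
```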
The invention provides an automobile part defect detection system based on deep learning, which comprises a data acquisition module, a data preprocessing module, an automobile part type identification module, an automobile part position locating module, an automobile part defect classification module and an automobile part defect detection module, wherein:
the data acquisition module is used for acquiring data;
the data preprocessing module is used for preprocessing data;
The automobile part type identification module is used for identifying the type of the automobile part;
the automobile part position positioning module is used for positioning the positions of automobile parts;
the automobile part defect classification module is used for classifying automobile part defects;
the automobile part defect detection module is used for detecting defects of automobile parts;
the data acquisition module acquires an automobile key part image from the automobile multi-view image to obtain an automobile part defect detection original data set, and sends the automobile part defect detection original data set to the data preprocessing module;
the data preprocessing module receives an original data set for detecting the defects of the automobile parts from the data acquisition module, performs denoising, oversampling, data splicing and normalization on the original data set for detecting the defects of the automobile parts to obtain an enhanced data set for detecting the defects of the automobile parts, and sends the enhanced data set for detecting the defects of the automobile parts to the automobile part type identification module, the automobile part position positioning module and the automobile part defect classification module;
the automobile part type identification module receives the automobile part defect detection enhancement data set from the data preprocessing module, performs automobile part type identification operation on the automobile part defect detection enhancement data set to obtain automobile part type identification data, and sends the automobile part type identification data to the automobile part defect detection module;
The automobile part position locating module receives the automobile part defect detection enhancement data set from the data preprocessing module, carries out automobile part position locating operation on the automobile part defect detection enhancement data set to obtain automobile part position locating data, and sends the automobile part position locating data to the automobile part defect detection module;
the automobile part defect classification module receives the automobile part defect detection enhancement data set from the data preprocessing module, performs automobile part defect classification operation on the automobile part defect detection enhancement data set to obtain a defect type, and sends the defect type to the automobile part defect detection module;
the automobile part defect detection module receives the automobile part type identification data from the automobile part type identification module, receives the automobile part position locating data from the automobile part position locating module, receives the defect type from the automobile part defect classification module, and combines the automobile part type identification data, the automobile part position locating data and the defect type to obtain the automobile part defect detection data.
By adopting the scheme, the beneficial effects obtained by the invention are as follows:
(1) Aiming at the technical problem that, in existing automobile part defect detection methods, parts are found and classified from real image data manually and according to experience, which is inefficient and prone to false alarms and missed detections caused by human factors such as the pressure and eyesight of the technicians involved, this scheme creatively adopts a convolutional neural network method based on a region candidate network to identify the type of automobile parts automatically, improving the automation degree and the overall accuracy of the system;
(2) Aiming at the technical problem that automatic detection in existing automobile part defect detection methods is often affected by the complexity of the automobile structure, weather conditions and illumination changes, so that the positioning accuracy of automobile parts is low and subsequent defect detection and classification are negatively affected, this scheme creatively adopts a method combining a gating unit with a recurrent neural network to fuse context information features of the automobile part defect detection data and locate the position of the automobile parts, detecting position defects of the automobile parts more intuitively and improving the overall usability of the system;
(3) Aiming at the technical problem that existing automobile part defect detection methods can automatically detect only a few defect types and therefore cannot effectively meet the detection requirements for automobile part defects, this scheme creatively adopts a deep neural network combined with an adversarial-training-optimized model to classify automobile part defects into three categories, namely part misalignment, part missing and part damage, meeting the detection requirements of automobile part defect detection and further improving the detection classification effectiveness and detection depth of the whole system.
Drawings
FIG. 1 is a schematic flow chart of a method for detecting defects of automobile parts based on deep learning;
FIG. 2 is a schematic diagram of an automobile part defect detection system based on deep learning provided by the invention;
FIG. 3 is a data flow diagram of the method for detecting defects of automobile parts based on deep learning provided by the invention;
FIG. 4 is a flow chart of step S3;
FIG. 5 is a flow chart of step S4;
fig. 6 is a flow chart of step S5.
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention; all other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be understood that the terms "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner," "outer," and the like indicate orientation or positional relationships based on those shown in the drawings, merely to facilitate description of the invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention.
Referring to fig. 1, the method for detecting defects of an automobile part based on deep learning provided by the invention comprises the following steps:
step S1: obtaining data;
Step S2: preprocessing data;
step S3: identifying the type of the automobile parts;
step S4: positioning the positions of automobile parts;
step S5: classifying defects of automobile parts;
step S6: and detecting defects of automobile parts.
In a second embodiment, referring to fig. 1, in step S1, the data acquisition specifically refers to acquiring images of key parts of the automobile from multi-view images of the automobile, so as to obtain the automobile part defect detection original data set I_O.
Embodiment III, referring to FIGS. 1, 2 and 3, based on the above embodiment: in step S2, the data preprocessing specifically refers to performing denoising, oversampling, data splicing and normalization operations on the data in the automobile part defect detection original data set I_O to obtain the automobile part defect detection enhanced data set I_S.
Embodiment IV, referring to FIGS. 1, 2, 3 and 4, based on the above embodiments: in step S3, the automobile part type identification specifically refers to performing automobile part type identification on the enhanced image data in the automobile part defect detection enhanced data set I_S by a convolutional neural network method based on a region candidate network, comprising the following steps:
Step S31: constructing a basic convolution network layer, specifically adopting a pre-trained ResNet network as the basic network, taking the enhanced image data in the automobile part defect detection enhanced data set I_S as the part image input data I_Input to be identified, and extracting features of the part image input data I_Input to be identified through the basic network to obtain the feature image I_F of the part to be identified;
Step S32: constructing a region candidate network layer, specifically connecting the region candidate network after the basic network and obtaining candidate regions through an anchor box generation operation, wherein the anchor box generation operation comprises the following steps:
Step S321: window sliding operation, specifically sliding a window over the feature image I_F of the part to be identified, wherein the calculation formula of the window center coordinates is:
x_i′ = s · (i′ mod w), y_i′ = s · (i′ // w);
wherein x_i′ is the window center horizontal coordinate, y_i′ is the window center vertical coordinate, s is the window sliding stride parameter, i′ is the window index, mod is the modulo operator, w is the window width, and // is the rounding division operator;
Step S322: calculating the anchor box width w_k, wherein the calculation formula is:
w_k = s_l · sqrt(r_k);
wherein w_k is the anchor box width, r_k is the anchor box aspect ratio, and s_l is a scale parameter;
Step S323: calculating the anchor box height h_k, wherein the calculation formula is:
h_k = s_l / sqrt(r_k);
wherein h_k is the anchor box height, r_k is the anchor box aspect ratio, and s_l is a scale parameter;
Step S324: calculating the anchor box coordinates Anchor Box, wherein the calculation formula is:
Anchor Box = (x_i′ - w_k/2, y_i′ - h_k/2, x_i′ + w_k/2, y_i′ + h_k/2);
wherein Anchor Box is the anchor box coordinates, x_i′ is the window center horizontal coordinate, y_i′ is the window center vertical coordinate, w_k is the anchor box width, and h_k is the anchor box height;
Step S325: generating anchor boxes, specifically generating anchor boxes from the calculated anchor box width w_k, anchor box height h_k and anchor box coordinates Anchor Box, and obtaining candidate regions from the generated anchor boxes;
Step S33: constructing a region classification network, specifically constructing a region of interest pool, and normalizing the size of the candidate regions through the region of interest pool to obtain a fixed specification feature map I_ROI, wherein constructing the region of interest pool comprises the following steps:
Step S331: calculating the pooled region size of the region of interest pool, wherein the calculation formula is:
W_OUT = round(W_IN / p), H_OUT = round(H_IN / p);
wherein W_IN is the candidate region width, H_IN is the candidate region height, W_OUT is the pooled region width, H_OUT is the pooled region height, round() is the rounding operator, and p is the pooling parameter;
Step S332: calculating the pooled region position of the region of interest pool, wherein the calculation formula is:
x_OUT = round(x_IN / p), y_OUT = round(y_IN / p);
wherein x_IN is the candidate region horizontal coordinate, y_IN is the candidate region vertical coordinate, x_OUT is the pooled region horizontal coordinate, y_OUT is the pooled region vertical coordinate, round() is the rounding operator, and p is the pooling parameter;
Step S34: constructing a full connection layer, specifically performing category prediction on the fixed specification feature map I_ROI using the full connection layer to obtain initial part identification categories;
Step S35: part identification screening, specifically screening the initial part identification categories through a non-maximum suppression algorithm, wherein the calculation formula of the non-maximum suppression algorithm is:
IoU = Area(Box_1 ∩ Box_2) / Area(Box_1 ∪ Box_2);
wherein IoU is the candidate region overlap calculated by the non-maximum suppression algorithm, Area(Box_1 ∩ Box_2) is the intersection area of the 1st candidate region and the 2nd candidate region selected by the non-maximum suppression algorithm, and Area(Box_1 ∪ Box_2) is the total union area of the 1st candidate region and the 2nd candidate region selected by the non-maximum suppression algorithm;
Step S36: training the automobile part type recognition model, specifically performing model training on the automobile part type recognition model through constructing the basic convolution network layer, constructing the region candidate network layer, constructing the region classification network, constructing the full connection layer and performing part identification screening, so as to obtain the automobile part type recognition model Model_API;
Step S37: automobile part type identification, specifically performing automobile part type identification and classification on the enhanced image data in the automobile part defect detection enhanced data set I_S by using the automobile part type recognition model Model_API, so as to obtain the automobile part type identification data D_IC.
By executing the above operations, aiming at the technical problem that, in existing automobile part defect detection methods, parts are found and classified from real image data manually and according to experience, which is inefficient and prone to false alarms and missed detections caused by human factors such as the pressure and eyesight of the technicians involved, this scheme creatively adopts a convolutional neural network method based on a region candidate network to identify the type of automobile parts automatically, improving the automation degree and the overall accuracy of the system.
An embodiment five, referring to fig. 1, fig. 2, fig. 3 and fig. 5, is based on the foregoing embodiments: in step S4, the automobile part position positioning specifically refers to performing automobile part position location on the enhanced image data in the automobile part defect detection enhanced data set I_S by a method combining a gating unit with a recurrent neural network, comprising the following steps:
Step S41: constructing the update gate, wherein the calculation formula is:
Update_t = σ(θ_U·[m_t, h_(t-1)] + b_U);
wherein Update_t is the positioning information output of the update gate at time t, σ() is the sigmoid function, θ_U is the update gate learning weight, m_t is the positioning context information of the enhanced image data in the automobile part defect detection enhanced data set I_S at time t, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and b_U is the update gate bias parameter;
Step S42: constructing the reset gate, wherein the calculation formula is:
Reset_t = σ(θ_R·[m_t, h_(t-1)] + b_R);
wherein Reset_t is the positioning information output of the reset gate at time t, σ() is the sigmoid function, θ_R is the reset gate learning weight, m_t is the positioning context information of the enhanced image data in the automobile part defect detection enhanced data set I_S at time t, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and b_R is the reset gate bias parameter;
Step S43: calculating the gating unit memory state, wherein the calculation formula is:
Cell_t = tanh(θ_C·[m_t, (Reset_t ⊙ h_(t-1))] + b_C);
wherein Cell_t is the gating unit memory state at time t, Reset_t is the positioning information output of the reset gate at time t, tanh() is the hyperbolic tangent function, θ_C is the memory state weight, m_t is the positioning context information of the enhanced image data in the automobile part defect detection enhanced data set I_S at time t, ⊙ is the element-wise product, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and b_C is the memory state bias parameter;
Step S44: updating the gating unit enhanced image data feature hidden state, wherein the calculation formula is:
h_t = Update_t ⊙ h_(t-1) + (1 - Update_t) ⊙ Cell_t;
wherein h_t is the enhanced image data feature hidden state at time t, Update_t is the positioning information output of the update gate at time t, ⊙ is the element-wise product, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and Cell_t is the gating unit memory state at time t;
Step S45: training the automobile part position positioning model, specifically performing model training on the automobile part position positioning model through constructing the update gate, constructing the reset gate, calculating the gating unit memory state and updating the gating unit enhanced image data feature hidden state, so as to obtain the automobile part position positioning model Model_APP;
Step S46: automobile part position positioning, specifically performing automobile part position determination on the enhanced image data in the automobile part defect detection enhanced data set I_S by using the automobile part position positioning model Model_APP, so as to obtain the automobile part position positioning data D_PD.
By executing the above operations, the technical problem that, in existing automobile part defect detection methods, automatic detection is often affected by the complexity of the automobile structure, weather conditions and illumination changes, so that the positioning accuracy of automobile parts is low and subsequent defect detection and classification are negatively affected, is solved.
Embodiment six, referring to FIGS. 1, 2, 3 and 6, based on the above embodiments: in step S5, the automobile part defect classification specifically refers to classifying automobile part defects in the enhanced image data of the automobile part defect detection enhanced data set I_S by a method combining a deep neural network with an adversarial-training-optimized model, comprising the following steps:
Step S51: constructing a discriminator network, specifically training a discriminator D through a pre-trained DenseNet network, randomly selecting 70% of the enhanced image data in the automobile part defect detection enhanced data set I_S as training images, and randomly selecting 30% of the enhanced image data as actual images I;
Step S52: constructing a generator network, specifically adopting a plurality of loss functions to optimize model training, comprising the following steps:
Step S521: using a mean square error loss function L_1 to optimize the error between the generated image and the actual image, wherein the calculation formula is:
L_1 = (1 / (M·N)) · Σ_(i=1..M) Σ_(j=1..N) (I_ij - I′_ij)²;
wherein L_1 is the mean square error loss function, M is the number of pixel rows of the generated image, N is the number of pixel columns of the generated image, i is the pixel row index, j is the pixel column index, I_ij is an actual image pixel, and I′_ij is a generated image pixel;
Step S522: using a similarity error loss function L_2 to optimize the quality of the generated image, wherein the calculation formula is:
L_2 = β · H(I, I′);
wherein L_2 is the similarity error loss function, β is the similarity error weight with a value range of [-1, 1], H(I, I′) is the similarity calculation function, I is the actual image, and I′ is the generated image;
Step S523: using a prediction probability loss function L_3 to regulate the overall prediction efficiency of the model, wherein L_3 is calculated from the similarity calculation function H(I, I′) between the actual image I and the generated image I′;
Step S524: constructing a minimized target loss function L, wherein the calculation formula is:
L = min F(L_1, L_2, L_3) = ω_1·L_1 + ω_2·L_2 + ω_3·L_3;
wherein L is the minimized target loss function, min is the minimum operation, F() is the target function with F(L_1, L_2, L_3) = ω_1·L_1 + ω_2·L_2 + ω_3·L_3, ω_1 is the mean square error weight factor, L_1 is the mean square error loss function, ω_2 is the similarity error weight factor, L_2 is the similarity error loss function, ω_3 is the prediction probability weight factor, and L_3 is the prediction probability loss function;
step S53: training a generator and a discriminator, specifically adopting the minimized target loss function L to optimize model training, and obtaining a generator G and a discriminator D through training;
Step S54: training the automobile part defect classification model, specifically performing model training on the automobile part defect classification model through constructing the discriminator network, constructing the generator network, and training the generator and the discriminator on the basis of the training images, so as to obtain the automobile part defect classification model Model_FC;
Step S55: automobile part defect classification, specifically classifying automobile part defects by using the automobile part defect classification model Model_FC to obtain the defect category F_C, wherein the defect categories include part misalignment, part missing and part damage;
By executing the above operations, aiming at the technical problem that existing automobile part defect detection methods can automatically detect only a few defect types and therefore cannot effectively meet the detection requirements for automobile part defects, this scheme creatively adopts a deep neural network combined with an adversarial-training-optimized model to classify automobile part defects into three categories, namely part misalignment, part missing and part damage, meeting the detection requirements of automobile part defect detection and further improving the detection classification effectiveness and detection depth of the whole system.
Embodiment seven, referring to FIGS. 1, 2 and 3, based on the above embodiments: further, in step S6, the automobile part defect detection specifically refers to obtaining the automobile part type identification data D_IC by using the automobile part type recognition model Model_API, obtaining the automobile part position positioning data D_PD by using the automobile part position positioning model Model_APP, obtaining the defect category F_C by using the automobile part defect classification model Model_FC, and combining the automobile part type identification data D_IC, the automobile part position positioning data D_PD and the defect category F_C to obtain the automobile part defect detection data D_F.
An eighth embodiment, referring to fig. 2 and fig. 3, is based on the foregoing embodiment, and the system for detecting defects of automotive parts based on deep learning provided by the present invention includes a data acquisition module, a data preprocessing module, an automotive part type identification module, an automotive part location module, an automotive part defect classification module, and an automotive part defect detection module.
The deep learning-based automobile part defect detection system provided by the invention comprises a data acquisition module, a data preprocessing module, an automobile part type identification module, an automobile part position locating module, an automobile part defect classification module and an automobile part defect detection module, wherein:
the data acquisition module is used for acquiring data;
the data preprocessing module is used for preprocessing data;
the automobile part type identification module is used for identifying the type of the automobile part;
The automobile part position positioning module is used for positioning the positions of automobile parts;
the automobile part defect classification module is used for classifying automobile part defects;
the automobile part defect detection module is used for detecting defects of automobile parts.
The data acquisition module acquires an automobile key part image from the automobile multi-view image to obtain an automobile part defect detection original data set, and sends the automobile part defect detection original data set to the data preprocessing module;
the data preprocessing module receives an original data set for detecting the defects of the automobile parts from the data acquisition module, performs denoising, oversampling, data splicing and normalization on the original data set for detecting the defects of the automobile parts to obtain an enhanced data set for detecting the defects of the automobile parts, and sends the enhanced data set for detecting the defects of the automobile parts to the automobile part type identification module, the automobile part position positioning module and the automobile part defect classification module;
the automobile part type identification module receives the automobile part defect detection enhancement data set from the data preprocessing module, performs automobile part type identification operation on the automobile part defect detection enhancement data set to obtain automobile part type identification data, and sends the automobile part type identification data to the automobile part defect detection module;
The automobile part position locating module receives the automobile part defect detection enhancement data set from the data preprocessing module, carries out automobile part position locating operation on the automobile part defect detection enhancement data set to obtain automobile part position locating data, and sends the automobile part position locating data to the automobile part defect detection module;
the automobile part defect classification module receives the automobile part defect detection enhancement data set from the data preprocessing module, performs automobile part defect classification operation on the automobile part defect detection enhancement data set to obtain a defect type, and sends the defect type to the automobile part defect detection module;
the automobile part defect detection module receives the automobile part type identification data from the automobile part type identification module, receives the automobile part position locating data from the automobile part position locating module, receives the defect type from the automobile part defect classification module, and combines the automobile part type identification data, the automobile part position locating data and the defect type to obtain the automobile part defect detection data.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The invention and its embodiments have been described above without limitation, and the actual construction is not limited to the embodiments shown in the drawings. In summary, if a person of ordinary skill in the art, informed by this disclosure, devises structural manners and embodiments similar to this technical solution without creative effort and without departing from the gist of the present invention, they shall fall within the protection scope of the invention.

Claims (5)

1. A deep learning-based automobile part defect detection method, characterized by comprising the following steps:
step S1: obtaining data;
step S2: preprocessing data;
step S3: identifying the type of the automobile parts;
step S4: positioning the positions of automobile parts;
step S5: classifying defects of automobile parts;
step S6: detecting defects of automobile parts;
in step S1, the data acquisition specifically refers to acquiring images of key parts of the automobile from multi-view images of the automobile, so as to obtain an automobile part defect detection original data set I_O;
in step S2, the data preprocessing specifically refers to performing denoising, oversampling, data splicing and normalization operations on the data in the automobile part defect detection original data set I_O to obtain an automobile part defect detection enhanced data set I_S;
in step S3, the automobile part type identification specifically refers to performing automobile part type identification on the enhanced image data in the automobile part defect detection enhanced data set I_S by a convolutional neural network method based on a region candidate network, comprising the following steps:
Step S31: constructing a basic convolution network layer, specifically adopting a pre-trained ResNet network as the basic network, taking the enhanced image data in the automobile part defect detection enhanced data set I_S as the part image input data I_Input to be identified, and extracting features of the part image input data I_Input to be identified through the basic network to obtain the feature image I_F of the part to be identified;
Step S32: the method comprises the steps of constructing a region candidate network layer, specifically connecting the region candidate network after the basic network, obtaining a candidate region through anchor block generating operation, wherein the anchor block generating operation comprises the following steps:
step S321: window sliding operation, in particular according to the feature image I of the part to be identified F In the window sliding operation, the calculation formula of the window center coordinate is:
wherein x is i′ Is the window center horizontal coordinate, y i′ Is the window center vertical coordinate, s is the window sliding stride parameter, i' is the window index, mod is the modulo operator, w is the window width, and// is the rounding divide operator;
step S322: calculating the anchor frame width w k The calculation formula is as follows:
wherein w is k Is the anchor frame width, r k Is the anchor frame aspect ratio, s 1 Is a scale parameter;
step S323: calculating the anchor point frame height h k The calculation formula is as follows:
in the formula, h k Is the anchor frame height, r k Is the anchor frame aspect ratio, s 1 Is a scale parameter;
step S324: calculating the anchor box coordinates Anchor Box as:
Anchor Box = (x_i′ - w_k/2, y_i′ - h_k/2, x_i′ + w_k/2, y_i′ + h_k/2);
wherein Anchor Box is the anchor box coordinates, x_i′ is the window center horizontal coordinate, y_i′ is the window center vertical coordinate, w_k is the anchor box width, and h_k is the anchor box height;
step S325: generating anchor boxes, specifically generating anchor boxes from the calculated anchor box width w_k, anchor box height h_k and anchor box coordinates Anchor Box, and obtaining candidate regions from the generated anchor boxes (an illustrative sketch of steps S321 to S325 is given below);
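By way of illustration only, the anchor box generation of steps S321 to S325 can be sketched as below; the grid-indexing convention, the parameter values and the function name generate_anchors are assumptions of this sketch, not features taken from the claim.

    import numpy as np

    def generate_anchors(num_windows, grid_width, stride, scale, aspect_ratios):
        """Enumerate window centres on a stride grid and build anchor boxes around them."""
        anchors = []
        for i in range(num_windows):
            # Window centre from the sliding operation (step S321): modulo for x, integer division for y
            x_c = (i % grid_width) * stride
            y_c = (i // grid_width) * stride
            for r in aspect_ratios:
                w_k = scale * np.sqrt(r)   # anchor box width  (step S322)
                h_k = scale / np.sqrt(r)   # anchor box height (step S323)
                # Anchor box coordinates as corners around the window centre (step S324)
                anchors.append((x_c - w_k / 2, y_c - h_k / 2, x_c + w_k / 2, y_c + h_k / 2))
        return np.array(anchors)

    # Example call with illustrative values; step S325 collects the generated anchors as candidate regions
    boxes = generate_anchors(num_windows=16, grid_width=4, stride=16, scale=128, aspect_ratios=[0.5, 1.0, 2.0])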
step S33: constructing a region classification network, specifically constructing a region-of-interest pool and normalizing the size of the candidate regions through the region-of-interest pool to obtain a fixed-specification feature map I_ROI, wherein constructing the region-of-interest pool comprises the following steps:
step S331: calculating the pooled-region size of the region-of-interest pool as:
W_OUT = round(W_IN / p);  H_OUT = round(H_IN / p);
wherein W_IN is the candidate region width, H_IN is the candidate region height, W_OUT is the pooled-region width, H_OUT is the pooled-region height, round() is the rounding operator, and p is the pooling parameter;
step S332: calculating the pooled-region position of the region-of-interest pool as:
x_OUT = round(x_IN / p);  y_OUT = round(y_IN / p);
wherein x_IN is the candidate region horizontal coordinate, y_IN is the candidate region vertical coordinate, x_OUT is the pooled-region horizontal coordinate, y_OUT is the pooled-region vertical coordinate, round() is the rounding operator, and p is the pooling parameter (a minimal sketch of steps S331 and S332 follows);
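A minimal sketch of the pooled-region arithmetic of steps S331 and S332 is given below, under the assumption that the pooling parameter p acts as a simple spatial down-scaling factor; the function name is hypothetical.

    def roi_pool_geometry(w_in, h_in, x_in, y_in, p):
        """Map a candidate region's size and position onto the region-of-interest pooling grid."""
        w_out = round(w_in / p)   # pooled-region width  (step S331)
        h_out = round(h_in / p)   # pooled-region height (step S331)
        x_out = round(x_in / p)   # pooled-region horizontal coordinate (step S332)
        y_out = round(y_in / p)   # pooled-region vertical coordinate   (step S332)
        return w_out, h_out, x_out, y_out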
step S34: constructing a fully connected layer, specifically performing category prediction on the fixed-specification feature map I_ROI through the fully connected layer to obtain initial part identification categories;
step S35: part identification screening, specifically screening the initial part identification categories through a non-maximum suppression algorithm, whose overlap measure is calculated as:
IoU = Area(Box_1 ∩ Box_2) / Area(Box_1 ∪ Box_2);
wherein IoU is the overlap of the candidate regions calculated by the non-maximum suppression algorithm, Area(Box_1 ∩ Box_2) is the intersection area of the 1st and 2nd candidate regions selected by the non-maximum suppression algorithm, and Area(Box_1 ∪ Box_2) is the union area of the 1st and 2nd candidate regions selected by the non-maximum suppression algorithm (a minimal sketch of this screening follows);
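The screening of step S35 relies on the intersection-over-union overlap defined above; the sketch below shows one conventional greedy implementation, where the suppression threshold of 0.5 is an assumed illustrative value rather than a value taken from the claim.

    def iou(box1, box2):
        """Intersection over union of two (x1, y1, x2, y2) boxes."""
        x1 = max(box1[0], box2[0]); y1 = max(box1[1], box2[1])
        x2 = min(box1[2], box2[2]); y2 = min(box1[3], box2[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
        area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
        union = area1 + area2 - inter
        return inter / union if union > 0 else 0.0

    def non_max_suppression(boxes, scores, threshold=0.5):
        """Keep the highest-scoring boxes and drop candidates that overlap a kept box too strongly."""
        order = sorted(range(len(boxes)), key=lambda k: scores[k], reverse=True)
        kept = []
        for k in order:
            if all(iou(boxes[k], boxes[j]) < threshold for j in kept):
                kept.append(k)
        return kept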
step S36: training the automobile part type identification model, specifically training it through constructing the basic convolutional network layer, constructing the region candidate network layer, constructing the region classification network, constructing the fully connected layer and performing part identification screening, to obtain an automobile part type identification model Model_API;
step S37: automobile part type identification, specifically using the automobile part type identification model Model_API to identify and classify the types of automobile parts in the enhanced image data of the automobile part defect detection enhanced data set I_S, obtaining automobile part type identification data D_IC;
In step S4, the automobile part position is located, specifically by applying a method combining a gating unit with a recurrent neural network to the enhanced image data in the automobile part defect detection enhanced data set I_S, comprising the following steps:
step S41: the update gate is constructed, and the calculation formula is as follows:
Update_t = σ(θ_U[m_t, h_(t-1)] + b_U);
wherein Update_t is the positioning information output of the update gate at time t, σ() is the sigmoid function, θ_U is the learning weight of the update gate, m_t is the positioning context information at time t of the enhanced image data in the automobile part defect detection enhanced data set I_S, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and b_U is the update gate bias parameter;
step S42: the reset gate is constructed, and the calculation formula is as follows:
Reset_t = σ(θ_R[m_t, h_(t-1)] + b_R);
wherein Reset_t is the positioning information output of the reset gate at time t, σ() is the sigmoid function, θ_R is the learning weight of the reset gate, m_t is the positioning context information at time t of the enhanced image data in the automobile part defect detection enhanced data set I_S, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and b_R is the reset gate bias parameter;
step S43: calculating the memory state of the gating unit, wherein the calculation formula is as follows:
Cell_t = tanh(θ_C[m_t, (Reset_t ⊙ h_(t-1))] + b_C);
wherein Cell_t is the gating unit memory state at time t, Reset_t is the positioning information output of the reset gate at time t, tanh() is the hyperbolic tangent function, θ_C is the memory state weight, m_t is the positioning context information at time t of the enhanced image data in the automobile part defect detection enhanced data set I_S, ⊙ is the element-wise product operator, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and b_C is the memory state bias parameter;
step S44: updating the hidden state of the enhanced image data characteristics of the gating unit, wherein the calculation formula is as follows:
h_t = Update_t ⊙ h_(t-1) + (1 - Update_t) ⊙ Cell_t;
wherein h_t is the enhanced image data feature hidden state at time t, Update_t is the positioning information output of the update gate at time t, ⊙ is the element-wise product operator, h_(t-1) is the enhanced image data feature hidden state at the previous time t-1, and Cell_t is the gating unit memory state at time t (a minimal numerical sketch of steps S41 to S44 follows);
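The gating computations of steps S41 to S44 follow the familiar gated-recurrent-unit form; the NumPy sketch below mirrors the four equations for a single time step, with the weight matrices and bias vectors standing in for the learned parameters (their shapes and names are assumptions of this sketch).

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gating_unit_step(m_t, h_prev, theta_u, theta_r, theta_c, b_u, b_r, b_c):
        """One gating-unit step: update gate, reset gate, candidate memory state, new hidden state."""
        x = np.concatenate([m_t, h_prev])                # [m_t, h_(t-1)]
        update = sigmoid(theta_u @ x + b_u)              # update gate   (step S41)
        reset = sigmoid(theta_r @ x + b_r)               # reset gate    (step S42)
        x_reset = np.concatenate([m_t, reset * h_prev])  # [m_t, Reset_t ⊙ h_(t-1)]
        cell = np.tanh(theta_c @ x_reset + b_c)          # memory state  (step S43)
        h_t = update * h_prev + (1.0 - update) * cell    # hidden state  (step S44)
        return h_t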
step S45: training the automobile part position locating model, specifically training it through constructing the update gate, constructing the reset gate, calculating the gating unit memory state and updating the gating unit enhanced image data feature hidden state, to obtain an automobile part position locating model Model_APP;
step S46: automobile part position locating, specifically using the automobile part position locating model Model_APP to locate the positions of automobile parts in the enhanced image data of the automobile part defect detection enhanced data set I_S, obtaining automobile part position locating data D_PD.
2. The deep learning-based automobile part defect detection method according to claim 1, characterized in that: in step S5, automobile part defects are classified, specifically by applying a method combining a deep neural network with an adversarial-training-optimized model to the enhanced image data in the automobile part defect detection enhanced data set I_S, comprising the following steps:
step S51: constructing a discriminator network, specifically training a discriminator D through a pre-trained DenseNet network, randomly selecting 70% of the enhanced image data in the automobile part defect detection enhanced data set I_S as training images and randomly selecting 30% as actual images I;
step S52: constructing a generator network, specifically adopting a plurality of loss functions to optimize model training, comprising the following steps:
step S521: using a mean square error loss function L_1 to optimize the error between the generated image and the actual image, calculated as:
L_1 = (1 / (M · N)) · Σ_(i=1..M) Σ_(j=1..N) (I_ij - I′_ij)²;
wherein L_1 is the mean square error loss function, M is the number of pixel rows of the generated image, N is the number of pixel columns of the generated image, i is the pixel row index, j is the pixel column index, I_ij is an actual image pixel, and I′_ij is a generated image pixel;
step S522: using a similarity error loss function L_2 to optimize the quality of the generated image, calculated as:
L_2 = β · H(I, I′);
wherein L_2 is the similarity error loss function, β is the similarity error weight with value range [-1, 1], H(I, I′) is a similarity calculation function, I is the actual image, and I′ is the generated image;
step S523: using a prediction probability loss function L_3 to adjust the overall prediction efficiency of the model, its calculation being based on the similarity calculation function H(I, I′); wherein L_3 is the prediction probability loss function, H(I, I′) is the similarity calculation function, I is the actual image, and I′ is the generated image;
step S524: constructing a minimized target loss function L, calculated as:
L = min F(L_1, L_2, L_3) = ω_1 · L_1 + ω_2 · L_2 + ω_3 · L_3;
wherein L is the minimized target loss function, min denotes minimization, F() is the target function with F(L_1, L_2, L_3) = ω_1 · L_1 + ω_2 · L_2 + ω_3 · L_3, ω_1 is the mean square error weight factor, L_1 is the mean square error loss function, ω_2 is the similarity error weight factor, L_2 is the similarity error loss function, ω_3 is the prediction probability weight factor, and L_3 is the prediction probability loss function (a minimal sketch of this weighted loss combination follows);
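A minimal sketch of the weighted loss combination of steps S521 to S524 follows; the similarity function H and the prediction probability term L_3 are passed in as user-supplied callables because their exact forms are not reproduced here, and the weight values are illustrative assumptions.

    import numpy as np

    def mse_loss(actual, generated):
        """Mean square error between the actual and generated images (step S521)."""
        return float(np.mean((actual - generated) ** 2))

    def combined_loss(actual, generated, similarity_fn, l3_fn, w1=1.0, w2=0.1, w3=0.01, beta=1.0):
        """Weighted objective L = w1*L1 + w2*L2 + w3*L3 of step S524."""
        l1 = mse_loss(actual, generated)
        l2 = beta * similarity_fn(actual, generated)   # similarity error loss, L2 = beta * H(I, I') (step S522)
        l3 = l3_fn(actual, generated)                  # prediction probability loss (step S523), form not shown here
        return w1 * l1 + w2 * l2 + w3 * l3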
step S53: training a generator and a discriminator, specifically adopting the minimized target loss function L to optimize model training, and obtaining a generator G and a discriminator D through training;
step S54: training the automobile part defect classification Model, namely training the automobile part defect classification Model by constructing a discriminator network, constructing a generator network, training a generator and a discriminator and based on the training image to obtain an automobile part defect classification Model FC
step S55: automobile part defect classification, specifically using the automobile part defect classification model Model_FC to classify the defects of automobile parts and obtain a defect category F_C, wherein the defect categories include part misalignment, part missing and part damage.
3. The deep learning-based automobile part defect detection method according to claim 2, characterized in that: in step S6, automobile part defects are detected, specifically by using the automobile part type identification model Model_API to obtain the automobile part type identification data D_IC, using the automobile part position locating model Model_APP to obtain the automobile part position locating data D_PD, using the automobile part defect classification model Model_FC to obtain the defect category F_C, and combining the automobile part type identification data D_IC, the automobile part position locating data D_PD and the defect category F_C to obtain automobile part defect detection data D_F (a minimal sketch of this combination follows).
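Step S6 merges the outputs of the three models into one detection record; the dictionary-based sketch below shows one possible representation, and all field names and example values are assumptions of this sketch.

    def combine_detection(part_type, part_location, defect_category):
        """Merge type identification, position locating and defect classification into one record."""
        return {
            "part_type": part_type,        # corresponds to D_IC from the type identification model
            "location": part_location,     # corresponds to D_PD from the position locating model
            "defect": defect_category,     # corresponds to F_C from the defect classification model
        }

    # Illustrative call with made-up values
    record = combine_detection("brake caliper", (120, 80, 260, 210), "part damage")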
4. A deep learning-based automobile part defect detection system for implementing the deep learning-based automobile part defect detection method according to any one of claims 1 to 3, characterized in that: the system comprises a data acquisition module, a data preprocessing module, an automobile part type identification module, an automobile part position locating module, an automobile part defect classification module and an automobile part defect detection module;
the data acquisition module is used for acquiring data;
The data preprocessing module is used for preprocessing data;
the automobile part type identification module is used for identifying the type of the automobile part;
the automobile part position positioning module is used for positioning the positions of automobile parts;
the automobile part defect classification module is used for classifying automobile part defects;
the automobile part defect detection module is used for detecting defects of automobile parts.
5. The deep learning-based automotive part defect detection system of claim 4, wherein: the data acquisition module acquires an automobile key part image from the automobile multi-view image to obtain an automobile part defect detection original data set, and sends the automobile part defect detection original data set to the data preprocessing module;
the data preprocessing module receives an original data set for detecting the defects of the automobile parts from the data acquisition module, performs denoising, oversampling, data splicing and normalization on the original data set for detecting the defects of the automobile parts to obtain an enhanced data set for detecting the defects of the automobile parts, and sends the enhanced data set for detecting the defects of the automobile parts to the automobile part type identification module, the automobile part position positioning module and the automobile part defect classification module;
The automobile part type identification module receives the automobile part defect detection enhancement data set from the data preprocessing module, performs automobile part type identification operation on the automobile part defect detection enhancement data set to obtain automobile part type identification data, and sends the automobile part type identification data to the automobile part defect detection module;
the automobile part position locating module receives the automobile part defect detection enhancement data set from the data preprocessing module, carries out automobile part position locating operation on the automobile part defect detection enhancement data set to obtain automobile part position locating data, and sends the automobile part position locating data to the automobile part defect detection module;
the automobile part defect classification module receives the automobile part defect detection enhancement data set from the data preprocessing module, performs automobile part defect classification operation on the automobile part defect detection enhancement data set to obtain a defect type, and sends the defect type to the automobile part defect detection module;
the automobile part defect detection module receives the automobile part type identification data from the automobile part type identification module, receives the automobile part position locating data from the automobile part position locating module, receives the defect type from the automobile part defect classification module, and combines the automobile part type identification data, the automobile part position locating data and the defect type to obtain the automobile part defect detection data.
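The module data flow of claim 5 can be sketched as a simple orchestration function in which each module is represented by a callable; the function and argument names below are placeholders, not part of the claimed system.

    def run_defect_detection_pipeline(acquire, preprocess, identify_type, locate_position, classify_defect, combine):
        """Wire the six modules: acquisition -> preprocessing -> three parallel analyses -> combination."""
        raw_dataset = acquire()                               # data acquisition module
        enhanced_dataset = preprocess(raw_dataset)            # data preprocessing module
        type_data = identify_type(enhanced_dataset)           # automobile part type identification module
        position_data = locate_position(enhanced_dataset)     # automobile part position locating module
        defect_category = classify_defect(enhanced_dataset)   # automobile part defect classification module
        return combine(type_data, position_data, defect_category)  # automobile part defect detection module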
CN202311142490.4A 2023-09-06 2023-09-06 Deep learning-based automobile part defect detection method and system Active CN116883763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311142490.4A CN116883763B (en) 2023-09-06 2023-09-06 Deep learning-based automobile part defect detection method and system


Publications (2)

Publication Number Publication Date
CN116883763A CN116883763A (en) 2023-10-13
CN116883763B true CN116883763B (en) 2023-12-12

Family

ID=88272000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311142490.4A Active CN116883763B (en) 2023-09-06 2023-09-06 Deep learning-based automobile part defect detection method and system

Country Status (1)

Country Link
CN (1) CN116883763B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117236809B (en) * 2023-11-13 2024-02-27 宁德市天铭新能源汽车配件有限公司 Automobile part production management method and system based on artificial intelligence
CN117351006B (en) * 2023-12-04 2024-02-02 深圳玖逸行新能源汽车技术有限公司 Deep learning-based method and system for detecting surface defects of automobile sheet metal part

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345911A (en) * 2018-04-16 2018-07-31 东北大学 Surface Defects in Steel Plate detection method based on convolutional neural networks multi-stage characteristics
CN109300114A (en) * 2018-08-30 2019-02-01 西南交通大学 The minimum target components of high iron catenary support device hold out against missing detection method
KR102008973B1 (en) * 2019-01-25 2019-08-08 (주)나스텍이앤씨 Apparatus and Method for Detection defect of sewer pipe based on Deep Learning
CN110533725A (en) * 2019-09-06 2019-12-03 西南交通大学 A kind of a variety of position components methods of high iron catenary based on structure inferring network
CN110555842A (en) * 2019-09-10 2019-12-10 太原科技大学 Silicon wafer image defect detection method based on anchor point set optimization
CN111383273A (en) * 2020-03-07 2020-07-07 西南交通大学 High-speed rail contact net part positioning method based on improved structure reasoning network
WO2020173036A1 (en) * 2019-02-26 2020-09-03 博众精工科技股份有限公司 Localization method and system based on deep learning
CN112686372A (en) * 2020-12-28 2021-04-20 哈尔滨工业大学(威海) Product performance prediction method based on depth residual GRU neural network
CN114118282A (en) * 2021-12-01 2022-03-01 盐城工学院 Bearing fault diagnosis method based on attention mechanism and convolution gate control circulation unit model
KR20230033071A (en) * 2021-08-26 2023-03-08 충북대학교 산학협력단 Structural response estimation method using gated recurrent unit
CN116703928A (en) * 2023-08-08 2023-09-05 宁德市天铭新能源汽车配件有限公司 Automobile part production detection method and system based on machine learning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A fault diagnosis method for reducer bearings with a novel DSCNN-GRU structure; Wang Yang et al.; Mechanical Science and Technology for Aerospace Engineering (Issue 02); full text *
Part assembly inspection based on machine vision and deep neural networks; Wei Zhongyu et al.; Modular Machine Tool & Automatic Manufacturing Technique (Issue 03); full text *
A deep learning-based machine vision object detection algorithm and its application to bill detection; Liu Guixiong et al.; China Measurement & Test (Issue 05); full text *
Research on part instance segmentation and recognition based on deep learning; Huang Haisong et al.; Modular Machine Tool & Automatic Manufacturing Technique (Issue 05); full text *

Also Published As

Publication number Publication date
CN116883763A (en) 2023-10-13

Similar Documents

Publication Publication Date Title
CN116883763B (en) Deep learning-based automobile part defect detection method and system
CN110059558B (en) Orchard obstacle real-time detection method based on improved SSD network
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN112288008B (en) Mosaic multispectral image disguised target detection method based on deep learning
CN111080620A (en) Road disease detection method based on deep learning
CA3145241A1 (en) Machine learning systems and methods for improved localization of image forgery
CN111242026B (en) Remote sensing image target detection method based on spatial hierarchy perception module and metric learning
CN106815576B (en) Target tracking method based on continuous space-time confidence map and semi-supervised extreme learning machine
CN111091101A (en) High-precision pedestrian detection method, system and device based on one-step method
CN111079518A (en) Fall-down abnormal behavior identification method based on scene of law enforcement and case handling area
CN111833353B (en) Hyperspectral target detection method based on image segmentation
CN111738164B (en) Pedestrian detection method based on deep learning
CN117746077A (en) Chip defect detection method, device, equipment and storage medium
CN117197682B (en) Method for blind pixel detection and removal by long-wave infrared remote sensing image
CN115082909A (en) Lung lesion identification method and system
CN111626102B (en) Bimodal iterative denoising anomaly detection method and terminal based on video weak marker
CN114927236A (en) Detection method and system for multiple target images
Flewelling et al. Information theoretic weighting for robust star centroiding
CN113989742A (en) Nuclear power station plant pedestrian detection method based on multi-scale feature fusion
CN118097517B (en) Self-supervision video anomaly detection method based on double-stream space-time encoder
CN113971755B (en) All-weather sea surface target detection method based on improved YOLOV model
CN116452857A (en) Weak supervision remote sensing target detection method based on feature consistency and positioning update
CN117496371A (en) Tailing pond detection method
CN116343326A (en) Tunnel worker standardized construction detection method and system based on machine vision
CN116524463A (en) Deep learning parking space recognition method and system based on inclined frame target detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant