CN117495712A - Method, system and equipment for enhancing generated data of vehicle body part quality model - Google Patents


Info

Publication number
CN117495712A
CN117495712A (application CN202410000457.6A)
Authority
CN
China
Prior art keywords
data
point cloud
sample
body part
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410000457.6A
Other languages
Chinese (zh)
Inventor
陈小庆
季钟亮
赵跃伟
邱铁
周晓波
徐天一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Tianqi Mozhitong Vehicle Body Technology Co ltd
Original Assignee
Tianjin Tianqi Mozhitong Vehicle Body Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Tianqi Mozhitong Vehicle Body Technology Co ltd filed Critical Tianjin Tianqi Mozhitong Vehicle Body Technology Co ltd
Priority to CN202410000457.6A priority Critical patent/CN117495712A/en
Publication of CN117495712A publication Critical patent/CN117495712A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/094Adversarial learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/60Rotation of whole images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration using non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/37Determination of transform parameters for the alignment of images, i.e. image registration using transform domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method, a system and equipment for enhancing generated data of a vehicle body part quality model, and relates to the field of automobile engineering. The method comprises the following steps: acquiring two-dimensional image data and three-dimensional point cloud data of different body parts of an automobile, and generating real sample data; performing data enhancement processing on the two-dimensional image data to generate an enhanced image sample; performing data enhancement processing on the three-dimensional point cloud data to generate a dense point cloud sample; determining generated sample data of the vehicle body part according to the enhanced image sample and the dense point cloud sample; and training a body part quality model according to the real sample data and the generated sample data, and detecting the quality of different body parts according to the trained body part quality model. The invention can effectively reduce the data imbalance caused by the production of different vehicle body parts and improve the evaluation capability of the vehicle body part quality model for different types of vehicle body parts.

Description

Method, system and equipment for enhancing generated data of vehicle body part quality model
Technical Field
The invention relates to the field of automobile engineering, in particular to a method, a system and equipment for enhancing generated data of a quality model of a vehicle body part.
Background
In the process of automobile production and manufacturing, the accuracy and reliability of the vehicle body part quality model are critical to ensuring the quality of body parts: such a model can effectively estimate and predict the performance, reliability and service life of each body part, and it plays an important role in the design and production of body parts. However, because of acquisition cost, most automobile production workshops currently adopt sampling inspection. Faced with body parts that are numerous in variety, large in quantity and uneven in quality, the existing inspection approach can hardly provide enough data samples for the body part quality model, which limits the model's learning and evaluation capability. In addition, conventional body part inspection data tends to consist of identical or similar samples, for example body parts of the same material and the same specification, and therefore lacks diversity and coverage. This limits the predictive capability of the body part quality model for body parts of different specifications, materials and processes. In actual automobile production, because body part production is unbalanced, the number of samples of certain body parts may far exceed that of others, so that the body part quality model performs poorly when evaluating minority classes of body parts.
Disclosure of Invention
The invention aims to provide a method, a system and equipment for enhancing generated data of a vehicle body part quality model, which are used for solving the problem that the vehicle body part quality model has poor evaluation capability for different types of vehicle body parts due to the unbalanced sample quantities produced for different vehicle body parts.
In order to achieve the above object, the present invention provides the following solutions:
a method of generating data enhancement for a body part quality model, comprising:
acquiring two-dimensional image data and three-dimensional point cloud data of different body parts of an automobile, and generating real sample data;
performing data enhancement processing on the two-dimensional image data to generate an enhanced image sample;
performing data enhancement processing on the three-dimensional point cloud data to generate a dense point cloud sample;
determining generated sample data of the vehicle body part according to the enhanced image sample and the dense point cloud sample;
training a body part quality model according to the real sample data and the generated sample data, and detecting different body part qualities according to the trained body part quality model.
Optionally, performing data enhancement processing on the two-dimensional image data to generate an enhanced image sample, which specifically includes:
smoothing and wavelet denoising the two-dimensional image data;
performing geometric transformation, color transformation and affine transformation on the two-dimensional image data subjected to denoising treatment to generate an image sample;
improving the activation function of a generative adversarial network by adopting the SeLU activation function, and introducing a residual block to improve the generative adversarial network;
and alternately training the generator and the relative discriminator of the improved generative adversarial network with the image samples, and determining the enhanced image samples.
Optionally, smoothing and wavelet denoising the two-dimensional image data specifically includes:
using the formula $G(x,y)=\frac{1}{2\pi\sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$ to smooth and wavelet-denoise the two-dimensional image data; wherein $G(x,y)$ is the Gaussian smoothing kernel whose convolution with the image yields the denoised two-dimensional image; $\sigma$ is the standard deviation of the Gaussian distribution; $x$ is the pixel horizontal coordinate of the two-dimensional image data; and $y$ is the pixel vertical coordinate of the two-dimensional image data.
Optionally, the SeLU activation function is:

$\mathrm{SeLU}(x)=\begin{cases}\lambda x, & x>0\\ \lambda\alpha\left(e^{x}-1\right), & x\le 0\end{cases}$

wherein $\lambda$ is the scaling factor of the SeLU activation function output; $x$ is the input value of the activation function layer; and $\alpha$ is the negative slope parameter of the negative half-axis.
Optionally, the objective of the improved generative adversarial network is:

$L_{D}=\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{1}\left(D(x_{r})-D(x_{f})\right)\right]+\lambda\,\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{2}\left(D(x_{f})-D(x_{r})\right)\right]$

$L_{G}=\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{3}\left(D(x_{r})-D(x_{f})\right)\right]+\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{3}\left(D(x_{f})-D(x_{r})\right)\right]$

wherein $L_{D}$ is the Loss function of the relative discriminator; it encourages the discriminator to increase the score of the real sample data $x_{r}$ and to reduce the score of the generated sample data $x_{f}$; $P$ is the real sample data distribution obeyed by the real sample data $x_{r}$; $Q$ is the generated sample data distribution obeyed by the generated sample data $x_{f}$; $f_{1}$ is a first difference function for distinguishing the scoring difference between the real sample data and the generated sample data; $D(x_{r})$ is the score given by the discriminator for the real sample data $x_{r}$; $D(x_{f})$ is the score given by the discriminator for the generated sample data $x_{f}$; $\lambda$ is a balance term; $f_{2}$ is a second difference function for distinguishing the scoring difference between the real sample data and the generated sample data; $L_{G}$ is the Loss function of the generator; its first expectation term measures the expected output difference of the discriminator $D$ for the real sample data $x_{r}$ relative to the generated sample data $x_{f}$; its second expectation term measures the expected output difference of the discriminator $D$ for the generated sample data $x_{f}$ relative to the real sample data $x_{r}$; and $f_{3}$ is a third difference function for distinguishing the scoring difference between the real sample data and the generated sample data.
Optionally, performing data enhancement processing on the three-dimensional point cloud data to generate a dense point cloud sample, which specifically includes:
resampling and translational transformation are carried out on the three-dimensional point cloud data;
performing rotation and scaling operation on the resampled and translational transformed point cloud data to generate point cloud samples under different viewing angles and scales;
performing up-sampling operation on the point cloud sample through a PU-Net algorithm to obtain point cloud characteristics with 3D detail representation;
expanding the feature space of the point cloud features based on the effective feature expansion operation of the sub-pixel convolution layer, and generating expanded point cloud features;
and performing dimension reduction operation on the expanded point cloud characteristics to generate a dense point cloud sample.
Optionally, expanding the feature space of the point cloud feature based on an effective feature expansion operation of the sub-pixel convolution layer, and generating the expanded point cloud feature specifically includes:
using the formula $f'=\mathcal{RS}\left(\left[C_{1}^{2}\left(C_{1}^{1}(f)\right),C_{2}^{2}\left(C_{2}^{1}(f)\right),\ldots,C_{r}^{2}\left(C_{r}^{1}(f)\right)\right]\right)$ to expand the feature space of the point cloud features and generate the expanded point cloud features; wherein $f'$ is the expanded point cloud feature; $\mathcal{RS}(\cdot)$ is the resampling strategy applied to the point cloud features; $C_{i}^{1}(\cdot)$ is the output representation of the first-layer convolution of the $i$-th branch; $C_{i}^{2}(\cdot)$ applies a second-layer convolution to the output representation of the first-layer convolution $C_{i}^{1}$; and $i=1,2,\ldots,r$, where $r$ is the up-sampling rate of the point cloud data.
A generated data enhancement system for a body component quality model, comprising:
the data acquisition module is used for acquiring two-dimensional image data and three-dimensional point cloud data of different automobile body parts of the automobile and generating real sample data;
the image data enhancement module is used for carrying out data enhancement processing on the two-dimensional image data to generate an enhanced image sample;
the point cloud data enhancement module is used for carrying out data enhancement processing on the three-dimensional point cloud data to generate a dense point cloud sample;
a generated sample data determining module for determining generated sample data of a vehicle body component according to the enhanced image sample and the dense point cloud sample;
the training and detecting module is used for training the body part quality model according to the real sample data and the generated sample data and detecting different body part qualities according to the trained body part quality model.
An electronic device comprising a memory for storing a computer program and a processor that runs the computer program to cause the electronic device to perform the method of generating data enhancement for a body part quality model as described above.
Optionally, the memory is a non-transitory computer readable storage medium, and the non-transitory computer readable storage medium stores a computer program, and the computer program when executed by the processor implements the method for generating and enhancing the quality model of the vehicle body part.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects: according to the embodiment of the invention, the two-dimensional image data and the three-dimensional point cloud data of different automobile body parts of the automobile are subjected to data enhancement processing respectively, the enhanced image sample and the dense point cloud sample are generated, the generated sample data of the automobile body parts are determined, the generated sample data are utilized to expand the real sample data, and the generated sample data and the real sample data are utilized to train the automobile body part quality model together, so that the data imbalance caused by the production of different automobile body parts is effectively reduced, and the evaluation capability of the automobile body part quality model on the automobile body parts of different types is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for enhancing generated data of a quality model of a vehicle body part;
FIG. 2 is a schematic diagram of a data enhancement system for generating a model of the quality of a vehicle body part according to the present invention;
FIG. 3 is a diagram of a model of an improved generation countermeasure network implementing two-dimensional image data enhancement provided by the present invention;
fig. 4 is a model diagram of a data-driven point cloud upsampling network according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a method, a system and equipment for enhancing generated data of a vehicle body part quality model, which can effectively reduce the data imbalance caused by the production of different vehicle body parts and improve the evaluation capability of the vehicle body part quality model for different types of vehicle body parts.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1
In order to overcome the shortcomings of traditional automobile body part inspection and to address the insufficient quantity and quality of data available for the body part quality model in actual production, it is necessary to introduce generative data enhancement techniques into research on the body part quality model. Specifically, generative data enhancement techniques use generative models (e.g., variational auto-encoders, generative adversarial networks, etc.) to generate new data samples and thereby increase the diversity and richness of the data set. When applied to body part inspection, the generated body part samples can have different specifications, materials and processes and can cover a wider body part feature space. By introducing generative data enhancement, the original data set can be expanded and more comprehensive and diversified training data can be provided, thereby improving the performance of the body part quality model.
As shown in fig. 1, the method for enhancing the generated data of the quality model of the vehicle body part comprises the following steps:
step 101: and acquiring two-dimensional image data and three-dimensional point cloud data of different body parts of the automobile, and generating real sample data.
Step 102: and carrying out data enhancement processing on the two-dimensional image data to generate an enhanced image sample.
Step 103: and carrying out data enhancement processing on the three-dimensional point cloud data to generate a dense point cloud sample.
Step 104: and determining generated sample data of the vehicle body part according to the enhanced image sample and the dense point cloud sample.
Step 105: training a body part quality model according to the real sample data and the generated sample data, and detecting different body part qualities according to the trained body part quality model.
In practical application, step 101 specifically includes: in order to process the data of the automobile body parts, a data acquisition module for the automobile appearance is built, and real-time sensing, acquisition and feedback of the body parts are realized through the cooperation of computer equipment, an image acquisition card, an optical lens, a CCD camera and LED lamps.
The data acquisition module mainly comprises a 2D visual detection device and a 3D inverse scanning device.
The 2D vision inspection device is configured to image the body part being inspected and generate a digital image of the body part; the body part inspection module is responsible for receiving the digital image of the body part and comparing it with the template image to identify body part surface defects such as scratches, dents and oxidation. In addition, through geometric measurement and analysis of the two-dimensional image of the body part, key dimensional parameters and geometric characteristics such as diameter, length, angle and curvature are extracted, realizing accurate control of the dimensional specification of the body part.
The 3D inverse scanning device acquires high-precision three-dimensional point cloud data of the body part through three-dimensional structured light scanning, captures comprehensive geometric information of the body part through multiple scans at different viewing angles, and then uses a point cloud registration algorithm to fit and register the acquired point cloud data with a known CAD model, so as to evaluate the geometric accuracy and shape consistency of the body part.
Further, the acquired two-dimensional image data is input into the 2D visual detection device. Considering that the acquired image is easily disturbed during transmission by various factors such as noise, channel effects, A/D conversion errors and magnetic fields, the two-dimensional images in the two-dimensional image data need to be preprocessed; the two-dimensional image is smoothed and wavelet-denoised with the aid of Canny edge detection and Hough transformation, as shown in formula (1):
$G(x,y)=\frac{1}{2\pi\sigma^{2}}\,e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$ (1)

wherein $G(x,y)$ is the Gaussian smoothing kernel whose convolution with the image yields the denoised two-dimensional image; $\sigma$ is the standard deviation of the Gaussian distribution; $x$ is the pixel horizontal coordinate of the two-dimensional image data; and $y$ is the pixel vertical coordinate of the two-dimensional image data.
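As an illustration of this preprocessing step, the sketch below applies Gaussian smoothing (formula (1)) followed by a simple wavelet shrinkage. The SciPy/PyWavelets calls, the db4 wavelet and the universal threshold are illustrative assumptions and not the patented implementation; the input is assumed to be a single-channel image array.

```python
# Minimal sketch (assumptions noted above): Gaussian smoothing + wavelet denoising
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def denoise_image(img: np.ndarray, sigma: float = 1.0, wavelet: str = "db4") -> np.ndarray:
    # Gaussian smoothing, i.e. convolution with the kernel G(x, y) of formula (1)
    smoothed = gaussian_filter(img.astype(np.float64), sigma=sigma)

    # Wavelet denoising: decompose, soft-threshold the detail coefficients, reconstruct
    coeffs = pywt.wavedec2(smoothed, wavelet, level=2)
    approx, details = coeffs[0], coeffs[1:]
    # Universal threshold estimated from the finest-scale diagonal coefficients (assumption)
    finest = details[-1][-1]
    thr = np.median(np.abs(finest)) / 0.6745 * np.sqrt(2 * np.log(smoothed.size))
    details = [tuple(pywt.threshold(d, thr, mode="soft") for d in level) for level in details]
    return pywt.waverec2([approx] + details, wavelet)[: img.shape[0], : img.shape[1]]
```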
The invention can effectively extract the edge area and key structure of the automobile body part in the image, and calculate the specification data of the automobile body part (namely the specification data of the part in figure 2) according to the scale map; on the basis, the two-dimensional image is further segmented, and the aim of efficiently extracting important features from the two-dimensional image is achieved by segmenting the complete two-dimensional image into a plurality of image areas. Then, by comparing the features of the extracted image sample with the features of the standard sample, the specific defect condition (namely, the surface geometric data in fig. 2) of the vehicle body part is defined.
In addition, the 3D inverse scanning device scans the automobile body part and generates corresponding three-dimensional point cloud data, and obtains internal geometric data of the automobile body part and texture information of each cavity (namely hole texture data in fig. 2) through the point cloud data, and the internal geometric data and the texture information of each cavity are stored as real sample data of the automobile body part and participate in a subsequent image/point cloud data enhancement process.
In practical applications, step 102 specifically includes: smoothing and wavelet denoising the two-dimensional image data; performing geometric transformation, color transformation and affine transformation on the denoised two-dimensional image data to generate image samples; improving the activation function of the generative adversarial network by adopting the SeLU activation function, and introducing residual blocks to improve the generative adversarial network; and alternately training the generator and the relative discriminator of the improved generative adversarial network with the image samples to determine the enhanced image samples.
In order to improve the detection and evaluation capability of the vehicle body part quality model on various vehicle body parts, the invention generates vehicle body part images under different visual angles and scales through geometric transformation such as rotation, translation, scaling and the like. Secondly, color transformations such as brightness, contrast and color adjustment are applied to generate image samples under different illumination and environmental conditions. In addition, affine transformation, perspective transformation and shape transformation are further introduced to simulate different deformation and posture changes of the automobile body part in consideration of deformation and posture changes possibly generated by the automobile body part. Meanwhile, by adding image processing technologies such as Gaussian noise, blurring and distortion, the robustness and the authenticity of the data are improved.
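These augmentations can be sketched as follows with OpenCV and NumPy; all parameter ranges, the uint8 input assumption and the specific OpenCV calls are illustrative choices rather than values taken from the patent.

```python
# Illustrative sketch of the 2D augmentations described above
import cv2
import numpy as np

def augment_image(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    h, w = img.shape[:2]
    # Geometric transform: random rotation and scaling about the image centre
    angle = rng.uniform(-15, 15)
    scale = rng.uniform(0.9, 1.1)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    out = cv2.warpAffine(img, M, (w, h), borderMode=cv2.BORDER_REFLECT)

    # Colour transform: brightness / contrast jitter
    alpha = rng.uniform(0.8, 1.2)   # contrast
    beta = rng.uniform(-20, 20)     # brightness
    out = cv2.convertScaleAbs(out, alpha=alpha, beta=beta)

    # Affine transform simulating small deformations and pose changes
    src = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
    jitter = rng.uniform(-0.03, 0.03, src.shape) * [w, h]
    dst = (src + jitter).astype(np.float32)
    out = cv2.warpAffine(out, cv2.getAffineTransform(src, dst), (w, h),
                         borderMode=cv2.BORDER_REFLECT)

    # Additive Gaussian noise to improve robustness
    noisy = out.astype(np.float32) + rng.normal(0, 3, out.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```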
In addition, aiming at the problems that automobile body part samples are of many types and that sample data of different types are unbalanced, an improved generative-adversarial-network two-dimensional image data enhancement algorithm (DCGAN) is proposed, which alleviates problems such as unstable models and poor generated image quality during model training. The main improvements are as follows: 1) aiming at the problem that generated body part images lack detail information, on the basis of the traditional generative adversarial network (Generative Adversarial Network, GAN), a SeLU activation function is adopted to replace the original ReLU activation function, so that richer image details are generated; 2) a relative-discriminator loss function is adopted, and stable, high-quality data samples are generated by improving the probability that forged data is judged to be real sample data; 3) residual blocks are introduced into the generative adversarial network to further improve the image resolution and the quality of the two-dimensional image samples of different types of body parts generated by the model.
Further, in order to further improve accuracy and reliability of the quality model of the vehicle body component, data enhancement processing is required to be performed by using the two-dimensional image data after denoising processing, and the method mainly comprises the following steps.
Step 2.1: performing geometric transformation and color transformation on the denoised two-dimensional image data of the body part to generate image samples with high diversity and fidelity, and then simulating different deformations and posture changes of the body part through affine transformation.
Step 2.2: the image samples generated in step 2.1 are used to alternately train the discriminator and the generator of the generative adversarial network. As shown in fig. 3, through the mutual competition and alternate training of the two, the generated image samples become increasingly similar to the real sample data, thereby producing more diversified and higher-quality image sample data.
Step 2.3: in order to alleviate the gradient dispersion problem in training the generative adversarial network, the invention further improves the activation function of the generative adversarial network on the basis of step 2.2, replacing the commonly used rectified linear unit (Rectified Linear Unit, ReLU) activation function with the self-normalizing (Self-normalizing Neural Network, SeLU) activation function.
Compared with the ReLU activation function, which filters out inputs less than 0, the SeLU activation function retains the computation of inputs less than 0 and therefore provides a richer feature representation for the generative model in the GAN. The calculation formula of the SeLU activation function is shown in formula (2):

$\mathrm{SeLU}(x)=\begin{cases}\lambda x, & x>0\\ \lambda\alpha\left(e^{x}-1\right), & x\le 0\end{cases}$ (2)

wherein $x$ represents the input value of the activation function layer, namely the image features extracted from a body part sample; $\lambda$ represents the scaling factor of the activation function output, which helps to keep the variance of the outputs of the layers of the generative adversarial network stable and avoids gradient vanishing; $\alpha$ represents the negative slope parameter of the negative half-axis: when $x\le 0$, the response is governed by $\alpha$, which helps to preserve and transmit the negative signals produced in the adversarial network and further alleviates the gradient vanishing problem. Here, the negative half-axis refers to the range $x\le 0$ of the activation function input on the real line; the SeLU function therefore has two segments, $x>0$ and $x\le 0$, and $\alpha$ governs the $x\le 0$ segment.
However, since the SeLU activation function retains the computation of inputs less than 0, the computation time of the forward and backward propagation of the generative model in the GAN increases. Therefore, in order to obtain richer features without increasing the amount of computation of the body part quality model, the SeLU activation function is employed only in the generator, while the discriminator still uses the ReLU activation function.
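A minimal PyTorch sketch of this design choice follows: a SeLU module implementing formula (2) is used in the generator blocks, while the discriminator blocks keep ReLU. The layer sizes are arbitrary, and the default λ ≈ 1.0507 and α ≈ 1.6733 are the standard self-normalizing constants; treating them as tunable parameters is an assumption.

```python
# Sketch only: SeLU in the generator, ReLU in the discriminator
import torch
import torch.nn as nn

class SeLU(nn.Module):
    def __init__(self, lam: float = 1.0507, alpha: float = 1.6733):
        super().__init__()
        self.lam, self.alpha = lam, alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # lam * x for x > 0, lam * alpha * (exp(x) - 1) for x <= 0, as in formula (2)
        return self.lam * torch.where(x > 0, x, self.alpha * (torch.exp(x) - 1.0))

def gen_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        SeLU(),                     # richer negative-side features in the generator
    )

def disc_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 4, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),      # discriminator keeps the cheaper ReLU
    )
```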
Step 2.4: because the score of the real sample data given by the discriminator does not decrease as the score of the generated samples increases, the traditional generative adversarial network carries a hidden risk of model collapse. Assume that the generator G and the discriminator D are trained to the optimum in each alternating training step; at the end of training the discriminator assigns the real sample data a score $D(x_{r})$ close to 1, while the generator strives to drive the score $D(x_{f})$ of the generated data toward 1 as well. Here, $x_{r}$ denotes the real sample data, $x_{f}$ denotes the generated sample data produced by the generator, $D(x_{r})$ is the score given by the discriminator for the real sample data, and $D(x_{f})$ is the score given by the discriminator for the generated sample data.

During the alternating training process, $D(x_{r})$ stays constantly equal to 1 and the corresponding gradient stays constantly equal to 0, which eventually causes the discriminator to stop updating and the model to collapse. Therefore, in the traditional generative adversarial network, when the discriminator is trained close to the optimum, the real sample data is no longer considered when computing the gradient; attention is paid only to how to make the generated sample data more realistic, which may make training unstable. If $D(x_{r})$ gradually decreases while $D(x_{f})$ gradually increases, the real sample data can take part in the gradient computation of the generative adversarial network, thereby avoiding the gradient vanishing phenomenon during training.
For this purpose, while improving the activation function in step 2.3, the invention adopts a generative adversarial network with a relative discriminator, so that during training $D(x_{r})$ moves toward $D(x_{f})$ while $D(x_{f})$ also moves toward $D(x_{r})$; finally $D(x_{r})$ and $D(x_{f})$ become consistent, which enables the discriminator to converge better. The formulas are shown in formula (3) and formula (4):

$L_{D}=\mathbb{E}_{x_{r}\sim P}\left[f\left(D(x_{r})\right)\right]-\mathbb{E}_{x_{f}\sim Q}\left[f\left(D(x_{f})\right)\right]$ (3)

$L_{G}=\mathbb{E}_{x_{r}\sim P}\left[f\left(D(x_{r})\right)\right]-\mathbb{E}_{x_{f}\sim Q}\left[f\left(D(x_{f})\right)\right]$ (4)

wherein $L_{D}$ is the Loss function of the discriminator, used to evaluate the discriminator's ability to judge whether a sample is real sample data or a generated sample; $\mathbb{E}_{x_{r}\sim P}[\cdot]$ denotes the score expectation over the real sample data, with $P$ the real sample data distribution obeyed by $x_{r}$; $D(x_{r})$ is the score with which the discriminator judges whether $x_{r}$ is real sample data; $\mathbb{E}_{x_{f}\sim Q}[\cdot]$ denotes the score expectation over the generated samples, with $Q$ the generated sample distribution obeyed by $x_{f}$; $D(x_{f})$ is the score with which the discriminator judges whether $x_{f}$ is a generated sample; and $f$ is a difference function that smooths the difference so that gradient descent can be performed more effectively. The discriminator correctly distinguishes real sample data from generated samples by maximizing $L_{D}$.

$L_{G}$ is the Loss function of the generator, used to evaluate how closely the generated samples approximate the real sample data; it shares the same value expression as $L_{D}$, combining the expectation over the real sample data $x_{r}$ and the expectation over the generated samples $x_{f}$ smoothed by the same difference function $f$. The generator makes the generated samples approach the real sample data by minimizing $L_{G}$.
In a conventional generative adversarial network, $D(x_{r})$ does not enter the generator objective, because the conventional generator only needs to make $D(x_{f})$ as high as possible without considering the value of $D(x_{r})$. In the improved generative adversarial network, however, the value of $D(x_{r})$ is also taken into account. To ensure that $D(x_{r})$ gradually decreases while $D(x_{f})$ gradually increases, the concept of a relative discriminator is introduced: the discrimination value of the real sample data minus the discrimination value of the generated sample is taken as the measurement standard, and the target formulas become:

$L_{D}=\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{1}\left(D(x_{r})-D(x_{f})\right)\right]+\lambda\,\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{2}\left(D(x_{f})-D(x_{r})\right)\right]$ (5)

$L_{G}=\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{3}\left(D(x_{r})-D(x_{f})\right)\right]+\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{3}\left(D(x_{f})-D(x_{r})\right)\right]$ (6)

wherein $L_{D}$ is the Loss function of the relative discriminator, used to encourage the discriminator to increase the score of the real sample data $x_{r}$ and to reduce the score of the generated sample data $x_{f}$; $P$ is the real sample data distribution obeyed by $x_{r}$ and $Q$ is the generated sample data distribution obeyed by $x_{f}$; $f_{1}$ is a first difference function that helps to distinguish the scoring difference between the real sample data and the generated sample data; $D(x_{r})$ is the score given by the discriminator for the real sample data $x_{r}$ and $D(x_{f})$ is the score given by the discriminator for the generated sample data $x_{f}$; $\lambda$ is a balance term set for balanced training, which ensures that the discriminator does not lean excessively toward either the real sample data or the generated sample data; $f_{2}$ is a second difference function, similar to $f_{1}$, representing the scoring difference of the discriminator between the real sample data and the generated sample data; setting two different difference functions $f_{1}$ and $f_{2}$ improves the stability of GAN model training.

$L_{G}$ is the Loss function of the generator. Its first term measures the expected output difference of the discriminator $D$ for the real sample data $x_{r}$ relative to the generated sample data $x_{f}$; its second term measures the expected output difference of the discriminator $D$ for the generated sample data $x_{f}$ relative to the real sample data $x_{r}$; $f_{3}$ is a third difference function that smooths these differences so that gradient descent can be performed more effectively. Minimizing $L_{G}$ encourages the generator to produce sample data $x_{f}$ that can better "fool" the discriminator, making the discriminator more likely to mistake it for real sample data $x_{r}$.

Keeping $L_{G}$ in the generator as small as possible correspondingly guides $D(x_{r})$ to move in a smaller direction along its gradient. Thus, during training, $D(x_{r})$ gradually decreases until it is consistent with $D(x_{f})$.
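The PyTorch sketch below illustrates one way to realise objectives (5) and (6). The concrete choices f1(y) = softplus(−y), f2(y) = y² for the balance term, f3(y) = softplus(y) and λ = 1 are assumptions, since the text only specifies generic difference functions and a balance term; d_real and d_fake are assumed to be the raw discriminator outputs on a batch of real and generated images.

```python
# Sketch of relative-discriminator losses under the assumptions stated above
import torch
import torch.nn.functional as F

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor,
                       lam: float = 1.0) -> torch.Tensor:
    # f1(y) = softplus(-y): minimising pushes D(x_r) - D(x_f) upward
    term1 = F.softplus(-(d_real - d_fake)).mean()
    # f2(y) = y^2 as a balance term, keeping the score gap from exploding
    term2 = ((d_fake - d_real) ** 2).mean()
    return term1 + lam * term2

def generator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # f3(y) = softplus(y) applied to both orderings pulls D(x_f) toward D(x_r)
    return (F.softplus(d_real - d_fake).mean()
            + F.softplus(d_fake - d_real).mean())
```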
Step 2.5: in order to further improve the image resolution, the invention introduces residual blocks on the basis of step 2.3 to improve the network structure of the existing generative adversarial network. Taking the generation of a 48 pixel × 48 pixel picture as an example, the four-dimensional tensor of the generative adversarial network is determined with a batch size of 64, an input noise dimension z of 100, initial noise sample pixels of 1 × 1 and an initial sample dimension of [64, 100, 1, 1]. By adding a residual network consisting of two residual structures to the generative adversarial network, the output dimension is kept unchanged and a 48 pixel × 48 pixel three-channel picture is finally output. The feature maps obtained after several transposed convolution operations are then fed into the residual network to further extract their details, ensuring that the image features of the preceding layers are refined and enriched as much as possible before entering the next transposed convolution layer, thereby avoiding the loss of image feature information.
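A PyTorch sketch of such a generator with two residual structures, mapping a [64, 100, 1, 1] noise tensor to a 48 pixel × 48 pixel three-channel image, follows; the channel widths, kernel sizes and exact placement of the residual blocks are illustrative assumptions.

```python
# Sketch: residual blocks inside a transposed-convolution generator (assumed layout)
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Identity shortcut keeps spatial size and channel count unchanged."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.SELU(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return F.selu(x + self.body(x))

class Generator(nn.Module):
    """Noise z of shape [64, 100, 1, 1] -> three-channel 48 x 48 image."""
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 512, 3, 1, 0, bias=False),  # 1x1 -> 3x3
            nn.BatchNorm2d(512), nn.SELU(),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),    # 3x3 -> 6x6
            nn.BatchNorm2d(256), nn.SELU(),
            ResidualBlock(256),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),    # 6x6 -> 12x12
            nn.BatchNorm2d(128), nn.SELU(),
            ResidualBlock(128),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),     # 12x12 -> 24x24
            nn.BatchNorm2d(64), nn.SELU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),       # 24x24 -> 48x48
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)
```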
Through the steps, an accurate and reliable vehicle body part detection model can be trained, and further vehicle body part model samples with high diversity and fidelity are generated, so that an original vehicle body part sample set is expanded, and the detection and evaluation capacity of a vehicle body part quality model on various vehicle body parts is improved.
According to the invention, the detail quality of the generated image samples can be effectively improved through the two-dimensional image data enhancement described above, model training is stabilized, and the generalization capability of the body part quality model in evaluating various body parts is improved.
In practical applications, step 103 specifically includes: resampling and translational transformation are carried out on the three-dimensional point cloud data; performing rotation and scaling operation on the resampled and translational transformed point cloud data to generate point cloud samples under different viewing angles and scales; performing up-sampling operation on the point cloud sample through a PU-Net algorithm to obtain point cloud characteristics with 3D detail representation; expanding the feature space of the point cloud features based on the effective feature expansion operation of the sub-pixel convolution layer, and generating expanded point cloud features; and performing dimension reduction operation on the expanded point cloud characteristics to generate a dense point cloud sample.
In practical application, in the aspect of three-dimensional point cloud data enhancement, the collected three-dimensional point cloud data is resampled and translated by adopting random sampling and point cloud translation operation so as to increase the density and spatial variation of the data. Secondly, through rotation and scaling operations, point cloud samples under different angles and scales are generated so as to increase the diversity and coverage range of data.
In addition, because of characteristics of three-dimensional point cloud data such as sparsity and irregularity, the invention adopts a data-driven point cloud up-sampling network (PU-Net), which learns multi-level features of each point and implicitly expands the point set through multi-branch convolution units in the feature space; the expanded features are then split into multiple sets of features and reconstructed into an up-sampled point set. In this way, the density and quality of the new point cloud samples obtained after operations such as random sampling and point cloud translation can be effectively improved, which facilitates training of the body part quality model.
Furthermore, for the collected three-dimensional point cloud data, which includes the internal geometric data and the cavity texture data of the body part, a point cloud data enhancement module is designed; it mainly comprises the following steps: random sampling, translation transformation and PU-Net-based point cloud up-sampling.
Step 3.1: for the collected three-dimensional point cloud data, firstly, adopting random sampling and point cloud translation operation to resample and translate the three-dimensional point cloud data so as to increase the density and the spatial variation of the data.
Step 3.2: performing rotation and scaling operations on the resampled and translation-transformed point cloud data obtained in step 3.1 to generate point cloud samples under different viewing angles and scales.
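Steps 3.1 and 3.2 can be sketched with NumPy as follows; the sampling ranges and the choice of a z-axis rotation are illustrative assumptions for an (N, 3) point cloud.

```python
# Minimal sketch of steps 3.1-3.2: resampling, translation, rotation and scaling
import numpy as np

def augment_point_cloud(points: np.ndarray, n_out: int,
                        rng: np.random.Generator) -> np.ndarray:
    # Random resampling (with replacement only if the cloud is too sparse)
    idx = rng.choice(len(points), size=n_out, replace=len(points) < n_out)
    pc = points[idx]

    # Random translation
    pc = pc + rng.uniform(-0.05, 0.05, size=(1, 3))

    # Random rotation about the z-axis
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pc = pc @ rot.T

    # Random isotropic scaling
    return pc * rng.uniform(0.9, 1.1)
```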
Step 3.3: using the point cloud samples generated in step 3.2, point cloud up-sampling is realized through the PU-Net algorithm to obtain a richer 3D detail representation.
Firstly, M central points are randomly selected on the surface of the target object and a patch is generated around each central point, with the geodesic distance between any point in the patch and the central point limited to a set distance d; then N points are sampled in each patch through Poisson disk sampling to serve as the real point distribution on that patch.
In the up-sampling task, local and global information are used together to obtain a smooth and unified output. Therefore, the set distance d is chosen at several different scales so that points of different proportions and densities can be extracted from the same object. The collected three-dimensional point cloud data is then passed through a feature extractor to extract its point-wise features, in preparation for the subsequent up-sampling process.
In order to better extract the point-wise features of the point cloud at different levels, the invention follows the interpolation method of PointNet++ and recovers the features of all original points by up-sampling from the down-sampled point features, so that point features at different levels are connected; the specific calculation formula is shown in formula (7):

$f^{(l)}(x)=\dfrac{\sum_{i=1}^{k}w_{i}(x)\,f^{(l)}(x_{i})}{\sum_{i=1}^{k}w_{i}(x)},\qquad l=1,\ldots,L$ (7)

wherein $x$ represents a point coordinate in the three-dimensional point cloud data; $f^{(l)}(x)$ denotes the interpolated feature of point $x$ at the $l$-th layer; $f^{(l)}(x_{i})$ denotes the feature of the $i$-th neighbour point $x_{i}$ of point $x$ at the $l$-th layer; $w_{i}(x)$ denotes the interpolation weight of the $i$-th neighbour point with respect to point $x$; $k$ is the number of neighbour points; and $L$ is the total number of layers.
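A sketch of this interpolation is given below; the choice of k = 3 neighbours and inverse-distance weights with exponent p = 2 follows the PointNet++ convention and is an assumption here.

```python
# Sketch of inverse-distance-weighted feature interpolation, formula (7)
import numpy as np

def interpolate_features(query_xyz: np.ndarray,   # (N, 3) points to up-sample to
                         known_xyz: np.ndarray,   # (M, 3) down-sampled points
                         known_feat: np.ndarray,  # (M, C) their features
                         k: int = 3, p: int = 2, eps: float = 1e-8) -> np.ndarray:
    d = np.linalg.norm(query_xyz[:, None, :] - known_xyz[None, :, :], axis=-1)  # (N, M)
    nn_idx = np.argsort(d, axis=1)[:, :k]                                       # (N, k)
    nn_d = np.take_along_axis(d, nn_idx, axis=1)
    w = 1.0 / (nn_d ** p + eps)                                                 # w_i(x)
    w = w / w.sum(axis=1, keepdims=True)
    return (w[..., None] * known_feat[nn_idx]).sum(axis=1)                      # (N, C)
```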
Step 3.4: in order to further increase the point cloud density, the number of features in the feature space of the point-wise features extracted in step 3.3 needs to be further expanded.
In the traditional 2D image field, deconvolution and interpolation are typically employed to achieve a comparable up-sampling effect; however, because points are irregular and unordered, it is not straightforward to apply these operations to a point cloud.
To this end, the invention proposes an efficient feature expansion operation based on a sub-pixel convolution layer. Assume that the input is the point-wise feature $f$ with dimension $N\times\tilde{C}$, where $N$ is the number of input points and $\tilde{C}$ is the dimension of the concatenated point-wise features; the expanded output is $f'$ with dimension $rN\times\tilde{C}_{2}$, where $r$ is the up-sampling rate and $f'$ is the output point-wise feature.

The expansion formula of the feature space is shown in formula (8):

$f'=\mathcal{RS}\left(\left[C_{1}^{2}\left(C_{1}^{1}(f)\right),C_{2}^{2}\left(C_{2}^{1}(f)\right),\ldots,C_{r}^{2}\left(C_{r}^{1}(f)\right)\right]\right)$ (8)

wherein $\mathcal{RS}(\cdot)$ is a resampling (reshuffle) strategy applied to the point cloud features, used to increase the point cloud density; $C_{i}^{1}(\cdot)$ denotes the output representation of the first-layer convolution of the $i$-th branch; $C_{i}^{2}(\cdot)$ applies a second-layer convolution to the output of the first-layer convolution $C_{i}^{1}$ to extract higher-level point cloud features; and $i=1,2,\ldots,r$, where $r$ is the up-sampling rate of the point cloud data.
The up-sampling process is shown in fig. 4 and includes block feature extraction, point feature embedding, feature expansion, and point cloud reconstruction.
The feature $f$ is replicated $r$ times, and each copy is fed into a different first-layer convolution $C_{i}^{1}$ and then into the corresponding second-layer convolution $C_{i}^{2}$.

The features obtained from the first-layer convolution $C_{i}^{1}$ alone are highly correlated with the input features, i.e., the results are very similar, which would cause the finally generated point positions to be very close to each other. Therefore, a further convolution layer $C_{i}^{2}$ is added so that the generated features can represent more diversified point cloud data. Finally, a series of convolutions reduces the point-wise feature dimension to 3, yielding the reconstructed point cloud data.
Formula (7) interpolates and samples the point-wise features of the point cloud, while formula (8) expands the interpolated features within the hierarchical network.

In the PointNet++ architecture, the network does not process the point cloud data at a single scale but at multiple scales, allowing it to capture structural information at different levels, from local to global. The focus of step 3.4 is how to integrate the features $f^{(l)}$ computed in step 3.3 into the higher-level network structure. The feature vector $f^{(l)}$ is resampled and transformed to adapt to the next layer of the network. Formula (8) expresses this process, in which $f'$ is a new feature vector derived from $f$ through reshaping operations such as convolution and copy-concatenation.
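The feature expansion of formula (8), together with the final dimension reduction to 3-D coordinates, can be sketched as follows in PyTorch; the channel widths, the ReLU nonlinearities and the use of 1 × 1 convolutions are assumptions rather than values specified in the text.

```python
# Sketch of multi-branch feature expansion (formula (8)) plus coordinate reconstruction
import torch
import torch.nn as nn

class FeatureExpansion(nn.Module):
    def __init__(self, c_in: int, c_mid: int = 128, r: int = 4):
        super().__init__()
        self.r = r
        # One (C1_i, C2_i) pair of 1x1 convolutions per up-sampling branch
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(c_in, c_mid, 1), nn.ReLU(),
                          nn.Conv1d(c_mid, c_mid, 1), nn.ReLU())
            for _ in range(r)
        ])
        # Series of convolutions reducing the point-wise feature dimension to 3
        self.coord = nn.Sequential(nn.Conv1d(c_mid, 64, 1), nn.ReLU(),
                                   nn.Conv1d(64, 3, 1))

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (B, C, N) point-wise features; returns (B, 3, r*N) dense coordinates
        expanded = torch.cat([branch(f) for branch in self.branches], dim=2)  # RS([...])
        return self.coord(expanded)
```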
Step 3.5: combining the enhanced image samples obtained in step 102 with the reconstructed point cloud samples (namely the dense point cloud samples) obtained in step 3.4 to obtain the generated sample data of the body part, and using the generated sample data together with the real sample data collected in step 101 as the training set to train the body part quality model, so that the problem of data imbalance caused by different body parts is effectively alleviated and the detection precision and generalization capability of the body part quality model are improved.
According to the invention, by adopting the three-dimensional point cloud data enhancement described above and using multi-branch convolution to implicitly expand the point set into multiple features and reconstruct it into an up-sampled point set with richer detail and greater variety, more accurate and complete three-dimensional point cloud data of the body part can be obtained.
Example two
In order to execute the corresponding method of the above embodiment to achieve the corresponding functions and technical effects, a system for generating data enhancement of a vehicle body part quality model is provided below.
A generated data enhancement system for a body component quality model, comprising:
the data acquisition module is used for acquiring two-dimensional image data and three-dimensional point cloud data of different automobile body parts of the automobile and generating real sample data.
And the image data enhancement module is used for carrying out data enhancement processing on the two-dimensional image data to generate an enhanced image sample.
And the point cloud data enhancement module is used for carrying out data enhancement processing on the three-dimensional point cloud data to generate a dense point cloud sample.
And the generated sample data determining module is used for determining generated sample data of the vehicle body part according to the enhanced image sample and the dense point cloud sample.
The training and detecting module is used for training the body part quality model according to the real sample data and the generated sample data and detecting different body part qualities according to the trained body part quality model.
Example III
An electronic device comprising a memory for storing a computer program and a processor that runs the computer program to cause the electronic device to perform the method of generating data enhancement for a body part quality model described above.
A computer readable storage medium storing a computer program which when executed by a processor implements the method of generating data enhancement for a body part quality model described above.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the system disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to assist in understanding the methods of the present invention and the core ideas thereof; also, it is within the scope of the present invention to be modified by those of ordinary skill in the art in light of the present teachings. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (10)

1. A method for generating data enhancement of a body part quality model, comprising:
acquiring two-dimensional image data and three-dimensional point cloud data of different body parts of an automobile, and generating real sample data;
performing data enhancement processing on the two-dimensional image data to generate an enhanced image sample;
performing data enhancement processing on the three-dimensional point cloud data to generate a dense point cloud sample;
determining generated sample data of the vehicle body part according to the enhanced image sample and the dense point cloud sample;
training a body part quality model according to the real sample data and the generated sample data, and detecting different body part qualities according to the trained body part quality model.
2. The method for generating a data enhancement for a vehicle body part quality model according to claim 1, wherein the data enhancement processing is performed on the two-dimensional image data to generate an enhanced image sample, specifically comprising:
smoothing and wavelet denoising the two-dimensional image data;
performing geometric transformation, color transformation and affine transformation on the two-dimensional image data subjected to denoising treatment to generate an image sample;
improving the activation function of a generative adversarial network by adopting the SeLU activation function, and introducing a residual block to improve the generative adversarial network;
and alternately training the generator and the relative discriminator of the improved generative adversarial network with the image samples, and determining the enhanced image samples.
3. The method for generating data enhancement of a vehicle body part quality model according to claim 2, wherein smoothing and wavelet denoising the two-dimensional image data specifically comprises:
using the formula $G(x,y)=\frac{1}{2\pi\sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$ to smooth and wavelet-denoise the two-dimensional image data; wherein $G(x,y)$ is the Gaussian smoothing kernel whose convolution with the image yields the denoised two-dimensional image; $\sigma$ is the standard deviation of the Gaussian distribution; $x$ is the pixel horizontal coordinate of the two-dimensional image data; and $y$ is the pixel vertical coordinate of the two-dimensional image data.
4. The method for generating data enhancement for a body part quality model according to claim 2, wherein the SeLU activation function is:
$\mathrm{SeLU}(x)=\begin{cases}\lambda x, & x>0\\ \lambda\alpha\left(e^{x}-1\right), & x\le 0\end{cases}$

wherein $\lambda$ is the scaling factor of the SeLU activation function output; $x$ is the input value of the activation function layer; and $\alpha$ is the negative slope parameter of the negative half-axis.
5. The method for generating data enhancement of a body part quality model according to claim 2, wherein the objective of the improved generative adversarial network is:

$L_{D}=\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{1}\left(D(x_{r})-D(x_{f})\right)\right]+\lambda\,\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{2}\left(D(x_{f})-D(x_{r})\right)\right]$

$L_{G}=\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{3}\left(D(x_{r})-D(x_{f})\right)\right]+\mathbb{E}_{x_{r}\sim P,\,x_{f}\sim Q}\left[f_{3}\left(D(x_{f})-D(x_{r})\right)\right]$

wherein $L_{D}$ is the Loss function of the relative discriminator; it encourages the discriminator to increase the score of the real sample data $x_{r}$ and to reduce the score of the generated sample data $x_{f}$; $P$ is the real sample data distribution obeyed by the real sample data $x_{r}$; $Q$ is the generated sample data distribution obeyed by the generated sample data $x_{f}$; $f_{1}$ is a first difference function for distinguishing the scoring difference between the real sample data and the generated sample data; $D(x_{r})$ is the score given by the discriminator for the real sample data $x_{r}$; $D(x_{f})$ is the score given by the discriminator for the generated sample data $x_{f}$; $\lambda$ is a balance term; $f_{2}$ is a second difference function for distinguishing the scoring difference between the real sample data and the generated sample data; $L_{G}$ is the Loss function of the generator; its first expectation term measures the expected output difference of the discriminator $D$ for the real sample data $x_{r}$ relative to the generated sample data $x_{f}$; its second expectation term measures the expected output difference of the discriminator $D$ for the generated sample data $x_{f}$ relative to the real sample data $x_{r}$; and $f_{3}$ is a third difference function for distinguishing the scoring difference between the real sample data and the generated sample data.
6. The method for generating data enhancement of a vehicle body part quality model according to claim 1, wherein the data enhancement processing is performed on the three-dimensional point cloud data to generate a dense point cloud sample, specifically comprising:
resampling and translational transformation are carried out on the three-dimensional point cloud data;
performing rotation and scaling operation on the resampled and translational transformed point cloud data to generate point cloud samples under different viewing angles and scales;
performing up-sampling operation on the point cloud sample through a PU-Net algorithm to obtain point cloud characteristics with 3D detail representation;
expanding the feature space of the point cloud features based on the effective feature expansion operation of the sub-pixel convolution layer, and generating expanded point cloud features;
and performing dimension reduction operation on the expanded point cloud characteristics to generate a dense point cloud sample.
7. The method for generating data enhancement of a vehicle body part quality model according to claim 6, wherein expanding the feature space of the point cloud feature based on the effective feature expansion operation of the sub-pixel convolution layer, generating the expanded point cloud feature, specifically comprises:
using the formula $f'=\mathcal{RS}\left(\left[C_{1}^{2}\left(C_{1}^{1}(f)\right),C_{2}^{2}\left(C_{2}^{1}(f)\right),\ldots,C_{r}^{2}\left(C_{r}^{1}(f)\right)\right]\right)$ to expand the feature space of the point cloud features and generate the expanded point cloud features; wherein $f'$ is the expanded point cloud feature; $\mathcal{RS}(\cdot)$ is the resampling strategy applied to the point cloud features; $C_{i}^{1}(\cdot)$ is the output representation of the first-layer convolution of the $i$-th branch; $C_{i}^{2}(\cdot)$ applies a second-layer convolution to the output representation of the first-layer convolution $C_{i}^{1}$; and $i=1,2,\ldots,r$, where $r$ is the up-sampling rate of the point cloud data.
8. A system for generating a data enhancement for a quality model of a vehicle body component, comprising:
the data acquisition module is used for acquiring two-dimensional image data and three-dimensional point cloud data of different automobile body parts of the automobile and generating real sample data;
the image data enhancement module is used for carrying out data enhancement processing on the two-dimensional image data to generate an enhanced image sample;
the point cloud data enhancement module is used for carrying out data enhancement processing on the three-dimensional point cloud data to generate a dense point cloud sample;
a generated sample data determining module for determining generated sample data of a vehicle body component according to the enhanced image sample and the dense point cloud sample;
the training and detecting module is used for training the body part quality model according to the real sample data and the generated sample data and detecting different body part qualities according to the trained body part quality model.
9. An electronic device comprising a memory for storing a computer program and a processor that runs the computer program to cause the electronic device to perform the method of generating data enhancement for a body part quality model according to any one of claims 1-7.
10. The electronic device of claim 9, wherein the memory is a non-transitory computer readable storage medium storing a computer program that when executed by a processor implements the method of generating data enhancement of a body part quality model of any one of claims 1-7.
CN202410000457.6A 2024-01-02 2024-01-02 Method, system and equipment for enhancing generated data of vehicle body part quality model Pending CN117495712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410000457.6A CN117495712A (en) 2024-01-02 2024-01-02 Method, system and equipment for enhancing generated data of vehicle body part quality model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410000457.6A CN117495712A (en) 2024-01-02 2024-01-02 Method, system and equipment for enhancing generated data of vehicle body part quality model

Publications (1)

Publication Number Publication Date
CN117495712A true CN117495712A (en) 2024-02-02

Family

ID=89674921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410000457.6A Pending CN117495712A (en) 2024-01-02 2024-01-02 Method, system and equipment for enhancing generated data of vehicle body part quality model

Country Status (1)

Country Link
CN (1) CN117495712A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767391A (en) * 2021-02-25 2021-05-07 国网福建省电力有限公司 Power grid line part defect positioning method fusing three-dimensional point cloud and two-dimensional image
CN116935174A (en) * 2023-06-07 2023-10-24 上海应用技术大学 Multi-mode fusion method and system for detecting surface defects of metal workpiece
CN117252815A (en) * 2023-08-29 2023-12-19 上海大学 Industrial part defect detection method, system, equipment and storage medium based on 2D-3D multi-mode image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767391A (en) * 2021-02-25 2021-05-07 国网福建省电力有限公司 Power grid line part defect positioning method fusing three-dimensional point cloud and two-dimensional image
CN116935174A (en) * 2023-06-07 2023-10-24 上海应用技术大学 Multi-mode fusion method and system for detecting surface defects of metal workpiece
CN117252815A (en) * 2023-08-29 2023-12-19 上海大学 Industrial part defect detection method, system, equipment and storage medium based on 2D-3D multi-mode image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LEQUAN YU ET AL.: "PU-Net: Point Cloud Upsampling Network", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》, 16 December 2018 (2018-12-16), pages 2791 - 2793 *
YANG SHIJING ET AL.: "Research on Visual Inspection Methods for Automobile Appearance Defects Based on Visual Measurement Technology" (基于视觉测量技术的汽车外观缺陷视觉检测方法研究), 25 June 2023 (2023-06-25), pages 69 - 71 *
GUO WEI ET AL.: "Image Data Set Enhancement Algorithm Based on Improved Generative Adversarial Network" (改进生成式对抗网络的图像数据集增强算法), Telecommunication Engineering (电讯技术), 17 September 2021 (2021-09-17), pages 2 - 5 *

Similar Documents

Publication Publication Date Title
CN113450307B (en) Product edge defect detection method
CN109580630B (en) Visual inspection method for defects of mechanical parts
CN113178009B (en) Indoor three-dimensional reconstruction method utilizing point cloud segmentation and grid repair
CN107301661A (en) High-resolution remote sensing image method for registering based on edge point feature
CN112329588B (en) Pipeline fault detection method based on Faster R-CNN
CN111553858B (en) Image restoration method and system based on generation countermeasure network and application thereof
CN114550021B (en) Surface defect detection method and device based on feature fusion
CN115797354B (en) Method for detecting appearance defects of laser welding seam
CN109470149A (en) A kind of measurement method and device of pipeline pose
CN112183325B (en) Road vehicle detection method based on image comparison
CN111951292B (en) Object surface reflection attribute extraction method, device, equipment and storage medium
CN113393439A (en) Forging defect detection method based on deep learning
CN110458812A (en) A kind of similar round fruit defects detection method based on color description and sparse expression
CN111783722B (en) Lane line extraction method of laser point cloud and electronic equipment
CN115482195A (en) Train part deformation detection method based on three-dimensional point cloud
CN116279592A (en) Method for dividing travelable area of unmanned logistics vehicle
CN114399505B (en) Detection method and detection device in industrial detection
CN116958645A (en) Scratch identification method for full-automatic integrated automobile paint repair
CN117495712A (en) Method, system and equipment for enhancing generated data of vehicle body part quality model
CN114219841B (en) Automatic lubricating oil tank parameter identification method based on image processing
Schnabel et al. Shape detection in point clouds
CN115115860A (en) Image feature point detection matching network based on deep learning
CN116523909B (en) Visual detection method and system for appearance of automobile body
CN117197215B (en) Robust extraction method for multi-vision round hole features based on five-eye camera system
CN117095033B (en) Multi-mode point cloud registration method based on image and geometric information guidance

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination