CN115620072A - Patch element classification method based on fusion features and hybrid neural network - Google Patents

Patch element classification method based on fusion features and hybrid neural network

Info

Publication number
CN115620072A
Authority
CN
China
Prior art keywords
picture
neural network
patch element
hybrid neural
binary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211392078.3A
Other languages
Chinese (zh)
Inventor
高会军
刘伟华
杨宪强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Harbin Institute of Technology
Priority to CN202211392078.3A
Publication of CN115620072A
Legal status: Pending


Classifications

    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06N 3/02, 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 3/4007: Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T 3/4046: Scaling of whole images or parts thereof using neural networks
    • G06V 10/20: Image preprocessing
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/806: Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature-extraction or classification level, of extracted features
    • G06V 10/82: Arrangements for image or video recognition or understanding using neural networks


Abstract

A patch element classification method based on fusion features and a hybrid neural network, aimed at solving the low classification efficiency and high error rate of surface mounting equipment when automatically classifying patch elements. Patch element pictures are collected and data enhancement is performed on them; the data-enhanced pictures are preprocessed; the preprocessed pictures are binarized and the features of the binary pictures are extracted; a hybrid neural network model is constructed, the preprocessed pictures and the binary-picture features are input into the model for training, the model parameters are optimized by a gradient descent method, and the class probability of the patch element is output; the same operations are then applied to patch element pictures to be classified to obtain their class probabilities. The invention belongs to the field of industrial automation.

Description

Patch element classification method based on fusion features and hybrid neural network
Technical Field
The invention relates to a patch element classification method and belongs to the field of industrial automation.
Background
With the continuous development of science and technology, surface mounting equipment that uses a high-speed, high-precision and highly stable image processing system to identify and mount surface-mounted electronic elements has come into wide use. In the bulk-material identification and mounting function of such equipment, different types of patch electronic elements differ in shape, size and surface texture, so the elements to be mounted must be identified and classified; however, existing image processing systems suffer from low classification efficiency and a high error rate when classifying patch electronic elements automatically.
Disclosure of Invention
The invention aims to solve the problems of low classification efficiency and high error rate when surface mounting equipment automatically classifies patch elements, and accordingly provides a patch element classification method based on fusion features and a hybrid neural network.
The technical scheme adopted by the invention is as follows:
the method comprises the following steps:
s1, collecting a patch element picture, and performing data enhancement on the patch element picture;
s2, preprocessing the patch element picture after data enhancement, wherein the specific process is as follows:
S21, translating a given data-enhanced picture by 5 pixels up, down, left, right and along the four diagonals, respectively, to obtain eight translated gray-level images, and subtracting each translated image from the original image to obtain eight gray-difference images;
S22, calculating a threshold T for extracting FAST feature points from the eight gray-difference images, and extracting FAST feature points on the corresponding patch element picture according to the threshold T;
S23, calculating the minimum enclosing rectangle formed by all the FAST feature points on the patch element picture in S22, and expanding the length and the width of the rectangle by 20% each to obtain a rectangle I;
S24, scaling the overlapping region of the patch element picture in S22 and the rectangle I by linear interpolation to obtain a preprocessed picture d whose length and width are both 64 pixels;
s3, carrying out binarization processing on the picture d, and extracting the characteristics of the binary picture;
S4, constructing a hybrid neural network model, inputting the picture d obtained by the preprocessing in S2 and the features of the binary picture obtained in S3 into the hybrid neural network model for training, optimizing the parameters of the hybrid neural network model by a gradient descent method, and outputting the class probability of the patch element to obtain the trained hybrid neural network model;
and S5, sequentially preprocessing and binarizing the picture of the patch element to be classified, extracting the features of the binary picture, and inputting the preprocessed picture and the extracted features of the binary picture into a trained hybrid neural network model to obtain the class probability of the patch element.
Further, in S1, patch element pictures are collected and data enhancement is performed on them; the specific process is:
S11, collecting a plurality of patch element pictures, and rotating each picture by -30°, -22°, -15°, -5°, 5°, 15°, 22° and 30° to obtain eight rotated pictures a;
S12, adding Gaussian noise with mean 0 and variances of 0.001, 0.05 and 0.01 to each picture a to obtain three noise-added pictures b;
S13, establishing a coordinate system at the center point of each picture b, with the X axis pointing right from the center and the Y axis pointing down, and flipping each picture b along the X axis and along the Y axis to obtain two flipped pictures c;
S14, merging all the flipped pictures c with all the pictures b to form a picture data set.
Further, the threshold T in S22:
T = (weighted combination of T1, T2, …, T8; the exact expression of formula (1) is rendered only as an image in the source)    (1)

where T1, T2, …, T8 are the maximum between-class variances of the eight gray-difference images, and m1, m2 are weights satisfying m1 + m2 = 1.
Further, in S3, a binarization process is performed on the picture d, and features of the binary picture are extracted, and the specific process is as follows:
s31, carrying out binarization processing on each picture d by using a maximum inter-class variance method to obtain a binary picture;
s32, performing region screening on the binary image to obtain an effective region on the binary image;
s33, calculating the total number of white pixels in the effective area on the binary image and the minimum circumscribed rectangle area formed by all the white pixels, traversing each pixel position of the effective area on the binary image, and marking the pixel position as 1 if the pixel position is a white pixel, otherwise, marking the pixel position as 0;
s34, respectively calculating the proportion of the total number of white pixels in the effective region to the total number of pixels in the binary picture, the proportion of the area of the minimum circumscribed rectangle to the area of the binary picture, and the proportion of the total number of white pixels to the area of the minimum circumscribed rectangle;
and S35, extracting the features of the binary picture according to the S33 and the S34.
Further, in S32, performing region screening on the binary image to obtain an effective region on the binary image, where the specific process is as follows:
The binary picture is divided into a plurality of regions by a Blob method and the information of each region is extracted. The white-pixel area of each region is calculated, and regions whose white-pixel area is smaller than 8 are eliminated, leaving a plurality of regions a. The roundness and rectangularity of each region a are then calculated; range values for roundness and rectangularity are user-defined, and regions whose roundness and rectangularity fall below these range values are eliminated. The remaining regions b are taken as the effective regions of the binary picture.
Further, the calculation formula of the roundness is as follows:
P = 4πS / L²    (2)

where P is the roundness of each region b, S is the number of white pixels in each region b (which is also the region area), and L is the contour perimeter of the white-pixel region.
Further, the calculation formula of the rectangularity is as follows:

R = S / Smax    (3)

where R is the rectangularity of each region b, and Smax is the area of the smallest circumscribed rectangle made up of white pixels.
Further, in S35, the features of the binary image are extracted according to S33 and S34, and the specific process is as follows:
V = [α, β, γ, v1, v2, …, vi, …, v4096]    (4)

where V is the feature of the binary picture, α is the ratio of the total number of white pixels to the total number of pixels in the binary picture, β is the ratio of the minimum circumscribed rectangle area to the binary picture area, and γ is the ratio of the total number of white pixels to the minimum circumscribed rectangle area; vi (i = 1, 2, …, 4096) is the mark of the i-th pixel position in the binary picture.
Further, the hybrid neural network model in S4 comprises a convolutional neural network, a fully connected neural network and a plurality of SVMs (support vector machines); the convolutional neural network and the fully connected neural network are arranged in parallel, and both are connected to the plurality of SVMs;
the convolutional neural network sequentially comprises 1 convolutional layer, 1 pooling layer, 1 convolutional layer, 1 pooling layer and 1 full-connection layer, and ReLU is used as an activation unit;
the fully connected neural network comprises 3 fully connected layers in sequence, with 500, 500 and 128 nodes respectively.
Further, in S4 a hybrid neural network model is constructed, the picture d obtained by the preprocessing in S2 and the features of the binary picture obtained in S3 are input into the hybrid neural network model for training, the parameters of the model are optimized by a gradient descent method, and the class probability of the patch element is output to obtain the trained hybrid neural network model. The specific process is as follows:
The picture d obtained by the preprocessing in S2 is input into the convolutional neural network, which outputs the feature value of the picture d; at the same time, the features of the binary picture obtained in S3 are input into the fully connected neural network, which outputs the feature value of the binary picture. The feature value of the picture d and the feature value of the binary picture are input into each SVM to obtain a confidence score for each patch element category; the confidence scores are calibrated by logistic regression, and the probability of the category to which the patch element belongs is output. The probabilities output by the plurality of SVMs are sorted, and the category with the highest probability is taken as the category to which the patch element belongs; that is, the output of the hybrid neural network model is the class probability of the patch element.
Advantageous effects:
The invention performs data enhancement on the collected patch element pictures, expanding them into images at multiple angles while carrying out primary processing, which broadens the application range of the neural network.

The pictures are then preprocessed: the threshold for extracting FAST feature points is calculated from the maximum between-class variances of the gray-difference images, FAST feature points are extracted from the patch element picture according to this threshold, the minimum bounding rectangle of all FAST feature points is computed and its length and width are each expanded by 20% to obtain the expanded rectangle I, and the overlapping region of the patch element picture and the rectangle I is scaled by linear interpolation to obtain a preprocessed picture whose length and width are both 64 pixels. Preprocessing reduces the influence of noise and other interference on the picture.

The picture is binarized and the binary picture is screened by region: the information of each region is extracted by a Blob method, the white-pixel area of each region is calculated, regions with white-pixel area smaller than 8 are eliminated, the roundness and rectangularity of the remaining regions are calculated, and regions below the user-defined range values of roundness and rectangularity are eliminated to obtain the effective regions. The total number of white pixels in the effective regions and the area of the minimum circumscribed rectangle formed by all the white pixels are calculated; each pixel position of the effective regions is traversed and marked 1 if it is a white pixel and 0 otherwise; the ratios of the total number of white pixels to the total number of pixels in the binary picture, of the minimum circumscribed rectangle area to the binary picture area, and of the total number of white pixels to the minimum circumscribed rectangle area are calculated; and the features of the binary picture are extracted from this information.

Finally, a hybrid neural network model is constructed, the preprocessed picture and the features of the binary picture are input into it, and the class probability of the patch element is output. The model comprises a convolutional neural network, a fully connected neural network and a plurality of SVMs; the two networks are arranged in parallel and both are connected to the SVMs. By mixing hand-crafted features with features extracted by neural networks, the invention achieves high speed, strong applicability, strong operability, high classification accuracy and a low error rate, and is suitable for classifying many kinds of patch elements.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram illustrating the result of image preprocessing.
Detailed Description
Embodiment 1: This embodiment is described with reference to FIG. 1 and FIG. 2. The patch element classification method based on fusion features and a hybrid neural network according to this embodiment comprises the following steps:
s1, collecting a picture of the patch element, and performing data enhancement on the picture of the patch element, wherein the specific process is as follows:
S11, collecting a plurality of patch element pictures, and rotating each picture by -30°, -22°, -15°, -5°, 5°, 15°, 22° and 30° to obtain eight rotated pictures a.
The patch element pictures are collected with the surface mounting equipment: 3000 to 4000 pictures are taken, covering 5 kinds of patch elements. After the rotation, Gaussian-noise and flipping operations, the number of images is enlarged to 13 times the original number.
S12, adding Gaussian noise with mean 0 and variances of 0.001, 0.05 and 0.01 to each picture a to obtain three noise-added pictures b.
S13, establishing a coordinate system at the center point of each picture b, with the X axis pointing right from the center and the Y axis pointing down, and flipping each picture b along the X axis and along the Y axis to obtain two flipped pictures c.
S14, merging all the flipped pictures c with all the pictures b to form the picture data set.
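For illustration, the data enhancement in S11 to S14 can be sketched in Python with OpenCV and NumPy as below; this is a minimal reconstruction, not the patented implementation, and the choice to add the noise on intensities scaled to [0, 1] is an assumption the patent does not spell out.

```python
import cv2
import numpy as np

ANGLES = [-30, -22, -15, -5, 5, 15, 22, 30]   # degrees (S11)
VARIANCES = [0.001, 0.05, 0.01]               # Gaussian noise variances (S12)

def rotate(img, angle):
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h))

def add_gaussian_noise(img, var):
    # Zero-mean Gaussian noise added on the [0, 1] intensity scale.
    noise = np.random.normal(0.0, np.sqrt(var), img.shape)
    noisy = img.astype(np.float64) / 255.0 + noise
    return np.clip(noisy * 255.0, 0, 255).astype(np.uint8)

def augment(img):
    dataset = []
    for angle in ANGLES:                       # S11: eight rotated pictures a
        a = rotate(img, angle)
        for var in VARIANCES:                  # S12: three noisy pictures b
            b = add_gaussian_noise(a, var)
            dataset.append(b)                  # S14 keeps the pictures b ...
            dataset.append(cv2.flip(b, 0))     # S13: flip along the X axis -> c
            dataset.append(cv2.flip(b, 1))     # S13: flip along the Y axis -> c
    return dataset                             # ... merged with the pictures c
```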
S2, preprocessing the patch element picture after data enhancement, wherein the specific process is as follows:
S21, translating a given picture in the picture data set by 5 pixels up, down, left, right and along the four diagonals, respectively, to obtain eight translated gray-level images, and subtracting each translated image from the original image to obtain eight gray-difference images.
S22, calculating a threshold value T for extracting FAST characteristic points according to the eight gray difference images, and extracting FAST characteristic points on the patch element images corresponding to a certain image according to the threshold value T;
T = (weighted combination of T1, T2, …, T8; the exact expression of formula (1) is rendered only as an image in the source)    (1)

where T1, T2, …, T8 are the maximum between-class variances of the eight gray-difference images, in the same order as the translations; m1, m2 are weights satisfying m1 + m2 = 1, taken here as m1 = 0.6 and m2 = 0.4.
And S23, calculating a minimum enclosing rectangle formed by all FAST characteristic points on the patch element picture in the S22, and respectively expanding the length and the width of the rectangle by 20% to obtain an expanded rectangle I.
S24, scaling the overlapping region of the patch element picture in S22 and the rectangle I by linear interpolation to obtain a preprocessed picture d whose length and width are both 64 pixels.
In other words, for each patch element picture the region covered by the expanded rectangle I is extracted as a cropped picture, and the cropped picture is scaled by linear interpolation to a square picture 64 pixels wide and 64 pixels long, which completes the preprocessing. The preprocessing result is shown schematically in FIG. 2.
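As a sketch, the preprocessing in S21 to S24 could look as follows in Python with OpenCV; since the weighted combination in formula (1) survives only as an image, the particular mix of the maximum and the mean of T1 to T8 used below is an assumption, and Otsu's threshold value is used as a stand-in for the maximum between-class variance of each difference image.

```python
import cv2
import numpy as np

def preprocess(gray, m1=0.6, m2=0.4):
    h, w = gray.shape
    # S21: shift by 5 pixels in the eight directions and difference.
    shifts = [(0, -5), (0, 5), (-5, 0), (5, 0),
              (-5, -5), (5, 5), (5, -5), (-5, 5)]
    t_values = []
    for dx, dy in shifts:
        m = np.float32([[1, 0, dx], [0, 1, dy]])
        shifted = cv2.warpAffine(gray, m, (w, h))
        diff = cv2.absdiff(shifted, gray)
        # Otsu's threshold maximizes the between-class variance; its value
        # is used here as a proxy for T1..T8.
        t, _ = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        t_values.append(t)
    # S22: combine T1..T8 into the FAST threshold T (assumed form).
    t_fast = m1 * max(t_values) + m2 * (sum(t_values) / len(t_values))
    fast = cv2.FastFeatureDetector_create(threshold=max(1, int(t_fast)))
    keypoints = fast.detect(gray, None)
    if not keypoints:                      # fallback: use the whole picture
        return cv2.resize(gray, (64, 64), interpolation=cv2.INTER_LINEAR)
    pts = np.array([kp.pt for kp in keypoints], dtype=np.float32)
    # S23: minimum enclosing rectangle, length and width expanded by 20%.
    x, y, bw, bh = cv2.boundingRect(pts)
    x = max(0, int(x - 0.1 * bw))
    y = max(0, int(y - 0.1 * bh))
    bw, bh = int(bw * 1.2), int(bh * 1.2)
    # S24: crop the overlap of the picture and rectangle I, resize to 64 x 64.
    crop = gray[y:min(y + bh, h), x:min(x + bw, w)]
    return cv2.resize(crop, (64, 64), interpolation=cv2.INTER_LINEAR)
```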
And S3, carrying out binarization processing on the preprocessed patch element picture, and extracting the characteristics of the binary picture.
And S31, carrying out binarization processing on each picture d by using a maximum inter-class variance method to obtain a binary picture.
S32, carrying out region screening on the binary image to obtain an effective region on the binary image, wherein the specific process is as follows:
The binary picture is divided into a plurality of regions by a Blob method and the information of each region is extracted. The white-pixel area of each region is calculated, and regions whose white-pixel area is smaller than 8 are treated as interference regions and eliminated, leaving a plurality of regions a. The roundness and rectangularity of each region a are then calculated; regions whose roundness and rectangularity fall below user-defined range values are regarded as abnormally shaped regions and removed. The remaining regions b are taken as the effective regions of the binary picture.
The calculation formula of the roundness is as follows:
P = 4πS / L²    (2)

where P is the roundness of each region b, S is the number of white pixels in each region b (which is also the region area), and L is the contour perimeter of the white-pixel region.
The calculation formula of the rectangularity is as follows:

R = S / Smax    (3)

where R is the rectangularity of each region b, and Smax is the area of the minimum circumscribed rectangle made up of white pixels.
And S33, calculating the total number of white pixels in all the regions b on the binary picture and the minimum circumscribed rectangle area formed by all the white pixels, traversing each pixel position of each region b on the binary picture, and marking the pixel position as 1 if the pixel position is a white pixel, otherwise, marking the pixel position as 0.
S34, respectively calculating the ratio of the total number of white pixels in all the regions b to the total number of pixels in the binary picture, the ratio of the area of the minimum circumscribed rectangle formed by all the white pixels in the regions b to the area of the binary picture, and the ratio of the total number of white pixels in the regions b to the area of the minimum circumscribed rectangle. For the last ratio, the area of each white pixel is taken as 1, so the total white-pixel area equals the total number of white pixels.
S35, extracting the features of the binary image according to the S33 and the S34, wherein the specific process is as follows:
V = [α, β, γ, v1, v2, …, vi, …, v4096]    (4)

where V is the feature of the binary picture, α is the ratio of the total number of white pixels to the total number of pixels in the binary picture, β is the ratio of the minimum circumscribed rectangle area to the binary picture area, and γ is the ratio of the total number of white pixels to the minimum circumscribed rectangle area; vi (i = 1, 2, …, 4096) is the mark of the i-th pixel position in the binary picture, the total number of pixels in the 64 × 64 picture being 4096.
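The feature extraction in S31 to S35 can be sketched as below, again in Python with OpenCV; the roundness and rectangularity range values are user-defined in the method, so the cutoffs P_MIN and R_MIN are placeholder assumptions.

```python
import cv2
import numpy as np

P_MIN, R_MIN = 0.1, 0.3   # placeholder roundness / rectangularity cutoffs

def binary_features(picture_d):
    # S31: Otsu binarization of the 64 x 64 preprocessed picture d.
    _, binary = cv2.threshold(picture_d, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # S32: Blob-style region screening via connected components.
    n, labels = cv2.connectedComponents(binary)
    valid = np.zeros_like(binary)
    for i in range(1, n):
        region = np.uint8(labels == i) * 255
        s = int(np.count_nonzero(region))          # white-pixel area
        if s < 8:                                  # drop interference regions
            continue
        contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        perimeter = cv2.arcLength(contours[0], True)
        roundness = 4 * np.pi * s / perimeter ** 2 if perimeter else 0  # (2)
        _, _, bw, bh = cv2.boundingRect(contours[0])
        rectangularity = s / float(bw * bh)                             # (3)
        if roundness >= P_MIN and rectangularity >= R_MIN:
            valid |= region                        # keep as effective region
    # S33: white-pixel count, minimum circumscribed rectangle, 0/1 marks.
    white = int(np.count_nonzero(valid))
    ys, xs = np.nonzero(valid)
    rect_area = ((xs.max() - xs.min() + 1) * (ys.max() - ys.min() + 1)
                 if white else 1)
    # S34: the three ratio features.
    alpha = white / valid.size
    beta = rect_area / valid.size
    gamma = white / rect_area
    # S35: feature vector V of formula (4): 3 ratios + 4096 pixel marks.
    marks = (valid.reshape(-1) > 0).astype(np.float32)
    return np.concatenate(([alpha, beta, gamma], marks))
```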
S4, constructing a hybrid neural network model, inputting the picture d obtained by the preprocessing in S2 and the features of the binary picture obtained in S3 into the hybrid neural network model for training, and optimizing the parameters of the hybrid neural network model by a gradient descent method until the final model loss is below a preset threshold; the class probability of the patch element is output, and the trained hybrid neural network model is obtained. The specific process is as follows:
the hybrid neural network model comprises a convolutional neural network, a fully connected neural network and a plurality of SVM (support vector machines), wherein the convolutional neural network and the fully connected neural network are arranged in parallel and are connected with the plurality of SVM support vector machines.
The picture d obtained by the preprocessing in S2 is input into the convolutional neural network, which outputs the feature value of the picture d; at the same time, the features of the binary picture obtained in S3 are input into the fully connected neural network, which outputs the feature value of the binary picture. The feature value of the picture d and the feature value of the binary picture are input into each SVM to obtain a confidence score for each patch element category; the confidence scores are calibrated by logistic regression, and the probability of the category to which the patch element belongs is output. The probabilities output by the plurality of SVMs are sorted, and the category with the highest probability is taken as the category to which the patch element belongs; that is, the output of the hybrid neural network model is the class probability of the patch element.
The convolutional neural network sequentially comprises 1 convolutional layer, 1 pooling layer, 1 convolutional layer, 1 pooling layer and 1 full-connection layer, the ReLU is used as an activation unit, and the 2 convolutional layers respectively adopt 64 3x3 convolutional kernels and 128 3x3 convolutional kernels. The fully-connected neural network sequentially comprises 3 fully-connected layers, and the node number of each of the 3 fully-connected layers is 500, 500 and 128.
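A structural sketch of the two network branches in PyTorch is given below for illustration; the single input channel, the padding, the 2 × 2 max pooling and the 128-unit width of the CNN's fully connected layer are assumptions the patent does not state, and the one-vs-rest linear SVMs with sigmoid (logistic) calibration at the end stand in for the per-category SVMs with logistic-regression score calibration described above.

```python
import torch
import torch.nn as nn

class CnnBranch(nn.Module):
    """Conv-pool-conv-pool-FC branch for the preprocessed picture d."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(128 * 16 * 16, 128), nn.ReLU())

    def forward(self, x):              # x: (N, 1, 64, 64)
        return self.net(x)

class FcBranch(nn.Module):
    """500-500-128 fully connected branch for the binary-picture feature V."""
    def __init__(self, in_dim=4099):   # 3 ratios + 4096 pixel marks
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, 128), nn.ReLU())

    def forward(self, v):              # v: (N, 4099)
        return self.net(v)

# The two 128-dimensional outputs are concatenated and fed to the SVM heads.
# One-vs-rest linear SVMs with Platt (logistic) score calibration, e.g. via
# scikit-learn, approximate the per-category SVMs described above:
#
#   from sklearn.svm import LinearSVC
#   from sklearn.calibration import CalibratedClassifierCV
#   heads = CalibratedClassifierCV(LinearSVC(), method="sigmoid")
#   heads.fit(fused_features, labels)
#   probabilities = heads.predict_proba(fused_features)
```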
And S5, sequentially preprocessing and binarizing the picture of the patch element to be classified, extracting the features of the binary picture, and inputting the preprocessed picture and the extracted features of the binary picture into a trained hybrid neural network model to obtain the class probability of the patch element.

Claims (10)

1. A patch element classification method based on fusion features and a hybrid neural network, characterized by comprising the following steps:
s1, collecting a patch element picture, and performing data enhancement on the patch element picture;
s2, preprocessing the patch element picture after data enhancement, wherein the specific process is as follows:
S21, translating a given data-enhanced picture by 5 pixels up, down, left, right and along the four diagonals, respectively, to obtain eight translated gray-level images, and subtracting each translated image from the original image to obtain eight gray-difference images;
S22, calculating a threshold T for extracting FAST feature points from the eight gray-difference images, and extracting FAST feature points on the corresponding patch element picture according to the threshold T;
S23, calculating the minimum enclosing rectangle formed by all the FAST feature points on the patch element picture in S22, and expanding the length and the width of the rectangle by 20% each to obtain a rectangle I;
S24, scaling the overlapping region of the patch element picture in S22 and the rectangle I by linear interpolation to obtain a preprocessed picture d whose length and width are both 64 pixels;
s3, carrying out binarization processing on the picture d, and extracting the characteristics of the binary picture;
S4, constructing a hybrid neural network model, inputting the picture d obtained by the preprocessing in S2 and the features of the binary picture obtained in S3 into the hybrid neural network model for training, optimizing the parameters of the hybrid neural network model by a gradient descent method, and outputting the class probability of the patch element to obtain the trained hybrid neural network model;
and S5, sequentially preprocessing and binarizing the picture of the patch element to be classified, extracting the features of the binary picture, and inputting the preprocessed picture and the extracted features of the binary picture into a trained hybrid neural network model to obtain the class probability of the patch element.
2. The method for classifying patch elements based on fusion features and a hybrid neural network according to claim 1, wherein in S1, patch element pictures are collected and data enhancement is performed on them; the specific process is:
S11, collecting a plurality of patch element pictures, and rotating each picture by -30°, -22°, -15°, -5°, 5°, 15°, 22° and 30° to obtain eight rotated pictures a;
S12, adding Gaussian noise with mean 0 and variances of 0.001, 0.05 and 0.01 to each picture a to obtain three noise-added pictures b;
S13, establishing a coordinate system at the center point of each picture b, with the X axis pointing right from the center and the Y axis pointing down, and flipping each picture b along the X axis and along the Y axis to obtain two flipped pictures c;
S14, merging all the flipped pictures c with all the pictures b to form a picture data set.
3. A patch element classification method based on fusion feature and hybrid neural network as claimed in claim 2, wherein: threshold T in S22:
T = (weighted combination of T1, T2, …, T8; the exact expression of formula (1) is rendered only as an image in the source)    (1)

where T1, T2, …, T8 are the maximum between-class variances of the eight gray-difference images, and m1, m2 are weights satisfying m1 + m2 = 1.
4. A patch element classification method based on fusion feature and hybrid neural network as claimed in claim 3, wherein: and S3, carrying out binarization processing on the picture d, and extracting the characteristics of the binary picture, wherein the specific process is as follows:
s31, carrying out binarization processing on each picture d by using a maximum inter-class variance method to obtain a binary picture;
s32, performing region screening on the binary image to obtain an effective region on the binary image;
s33, calculating the total number of white pixels in the effective area on the binary image and the minimum circumscribed rectangle area formed by all the white pixels, traversing each pixel position of the effective area on the binary image, and marking the pixel position as 1 if the pixel position is a white pixel, otherwise, marking the pixel position as 0;
s34, respectively calculating the proportion of the total number of white pixels in the effective region to the total number of pixels in the binary image, the proportion of the area of the minimum circumscribed rectangle to the area of the binary image, and the proportion of the total number of white pixels to the area of the minimum circumscribed rectangle;
and S35, extracting the features of the binary picture according to the S33 and the S34.
5. The method for classifying patch elements based on fusion features and hybrid neural networks according to claim 4, wherein: in S32, performing region screening on the binary image to obtain an effective region on the binary image, and the specific process is as follows:
The binary picture is divided into a plurality of regions by a Blob method and the information of each region is extracted. The white-pixel area of each region is calculated, and regions whose white-pixel area is smaller than 8 are eliminated, leaving a plurality of regions a. The roundness and rectangularity of each region a are then calculated; range values for roundness and rectangularity are user-defined, and regions whose roundness and rectangularity fall below these range values are eliminated. The remaining regions b are taken as the effective regions of the binary picture.
6. The method for classifying patch elements based on fusion features and hybrid neural networks according to claim 5, wherein: the calculation formula of the roundness is as follows:
P = 4πS / L²    (2)

where P is the roundness of each region b, S is the number of white pixels in each region b (which is also the region area), and L is the contour perimeter of the white-pixel region.
7. The method for classifying patch elements based on fusion features and hybrid neural networks according to claim 6, wherein the calculation formula of the rectangularity is as follows:

R = S / Smax    (3)

where R is the rectangularity of each region b, and Smax is the area of the minimum circumscribed rectangle made up of white pixels.
8. The method for classifying patch elements based on fusion features and hybrid neural networks according to claim 7, wherein: extracting the features of the binary image according to S33 and S34 in S35, wherein the specific process is as follows:
V = [α, β, γ, v1, v2, …, vi, …, v4096]    (4)

where V is the feature of the binary picture, α is the ratio of the total number of white pixels to the total number of pixels in the binary picture, β is the ratio of the minimum circumscribed rectangle area to the binary picture area, and γ is the ratio of the total number of white pixels to the minimum circumscribed rectangle area; vi (i = 1, 2, …, 4096) is the mark of the i-th pixel position in the binary picture.
9. A patch element classification method based on fusion features and a hybrid neural network as claimed in claim 8, wherein the hybrid neural network model in S4 comprises a convolutional neural network, a fully connected neural network and a plurality of SVMs (support vector machines); the convolutional neural network and the fully connected neural network are arranged in parallel, and both are connected to the plurality of SVMs;
the convolutional neural network sequentially comprises 1 convolutional layer, 1 pooling layer, 1 convolutional layer, 1 pooling layer and 1 full-connection layer, and ReLU is used as an activation unit;
the full-connection neural network sequentially comprises 3 full-connection layers, and the node numbers of the 3 full-connection layers are respectively 500, 500 and 128.
10. The method for classifying patch elements based on fusion features and hybrid neural networks according to claim 9, wherein in S4 a hybrid neural network model is constructed, the picture d obtained by the preprocessing in S2 and the features of the binary picture obtained in S3 are input into the hybrid neural network model for training, the parameters of the model are optimized by a gradient descent method, and the class probability of the patch element is output to obtain the trained hybrid neural network model; the specific process is as follows:
the picture d obtained by the preprocessing in S2 is input into the convolutional neural network, which outputs the feature value of the picture d; at the same time, the features of the binary picture obtained in S3 are input into the fully connected neural network, which outputs the feature value of the binary picture; the feature value of the picture d and the feature value of the binary picture are input into each SVM to obtain a confidence score for each patch element category; the confidence scores are calibrated by logistic regression, and the probability of the category to which the patch element belongs is output; the probabilities output by the plurality of SVMs are sorted, and the category with the highest probability is taken as the category to which the patch element belongs, i.e. the output of the hybrid neural network model is the class probability of the patch element.
CN202211392078.3A (filed 2022-11-08): Patch element classification method based on fusion features and hybrid neural network, published as CN115620072A, pending.

Priority Applications (1)

Application Number: CN202211392078.3A; Priority/Filing Date: 2022-11-08; Title: Patch element classification method based on fusion features and hybrid neural network


Publications (1)

Publication Number: CN115620072A; Publication Date: 2023-01-17

Family

ID=84879055


Country Status (1)

Country: CN; Publication: CN115620072A


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794421A (en) * 2015-04-29 2015-07-22 华中科技大学 QR (quick response) code positioning and recognizing methods
CN107679580A (en) * 2017-10-21 2018-02-09 桂林电子科技大学 A kind of isomery shift image feeling polarities analysis method based on the potential association of multi-modal depth

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘伟华 (LIU Weihua): "Research on Visual Inspection Methods for Patch Elements Based on Sub-pixel Features", Wanfang China Dissertation Full-text Database, pages 34-49 *
郝岩 (HAO Yan) et al.: "Application of Convolutional Neural Networks in SAR Target Recognition", Journal of Chongqing University of Technology (Natural Science), vol. 32, no. 05, pages 204-209 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20230117