CN111145258A - Automatic feeding and discharging method for various automobile glasses of industrial robot - Google Patents


Info

Publication number
CN111145258A
CN111145258A (application number CN201911403775.2A)
Authority
CN
China
Prior art keywords: image, layer, pose, region, positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911403775.2A
Other languages
Chinese (zh)
Other versions
CN111145258B (en)
Inventor
粟华
尹章芹
史婷
张冶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Estun Robotics Co Ltd
Original Assignee
Nanjing Estun Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Estun Robotics Co Ltd filed Critical Nanjing Estun Robotics Co Ltd
Priority to CN201911403775.2A priority Critical patent/CN111145258B/en
Publication of CN111145258A publication Critical patent/CN111145258A/en
Application granted granted Critical
Publication of CN111145258B publication Critical patent/CN111145258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; Machine component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/06 Recognition of objects for industrial automation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic loading and unloading method for multiple kinds of automobile glass handled by an industrial robot. Automobile-glass categories are classified intelligently by an MLP classifier; an initial template positioning is performed by combining image-moment features with PCA principal components, and the pose is then refined to higher precision by least-squares optimization. The algorithm complexity is O(C), whereas the time complexity of the traditional template-positioning algorithm is as high as O(n⁴). Compared with the prior art, the method greatly improves timeliness, realizes efficient positioning for loading and unloading of multi-class materials, reduces manual intervention, and lowers production cost.

Description

Automatic feeding and discharging method for various automobile glasses of industrial robot
Technical Field
The invention relates to an automatic control method for a robot, in particular to a method by which a robot automatically loads and unloads various kinds of automobile glass.
Background
With the continuous progress of robot technology, more and more loading and unloading work in industrial production is completed by robots. A camera collects images; through image preprocessing, recognition, positioning, and calibration, the coordinates of the workpiece in the robot coordinate system are obtained and sent to the robot via serial communication, and the robot completes grasping and placing, thereby automating loading and unloading.
In the 3C industry, the workpieces at a given station are of a single type, and their pose can be obtained by template matching alone. For automobile glass, however, there may be many types, even dozens, to be recognized and located by template matching on the same production line, so the computation becomes very large and time-consuming.
To solve the above problem there are two existing approaches. In the first, the incoming glass is manually divided into several or dozens of categories during loading; template matching then sequentially computes the similarity between the incoming material and every template in each search space, and the category and pose of the incoming material are confirmed from the highest similarity and sent to the robot. In the second, a bar code or QR code is added to the glass; the incoming-material information is acquired by scanning the code to identify its type, and the pose is then confirmed by template matching. This second approach recognizes and locates relatively quickly, but requires bar codes for cooperation.
Chinese patent application CN103464383A discloses a sorting system and method for industrial robots: a camera captures an image sequence of the geometric workpieces to be sorted on a placing table into a PC; visual-processing software analyzes the sequence frame by frame, automatically identifies workpieces with regular shapes such as circles, rectangles, triangles, and hexagons, computes the relevant features, and then starts sorting. The method can quickly identify workpieces with such conventional regular shapes; however, many automobile glass shapes cannot be described by regular shapes, so the method is not suitable for sorting irregular automobile glass.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an automatic loading and unloading method for various kinds of automobile glass handled by an industrial robot, in which a higher-precision pose is finally solved by minimizing an error function.
The method achieves automatic loading and unloading of various kinds of automobile glass by an industrial robot through the following technical scheme:
step 1, training classifier
1.1 Label the Image of each material with its class, in the format <Image, L>, and add it to the training samples; each class contributes a group of Num images, forming the sample set.
1.2, Gaussian filtering is carried out on the sample set Image for noise smoothing. The kernel function of gaussian filtering is:
G(x, y) = \frac{1}{2\pi\sigma^{2}} \exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)
where x, y represents the offset from the center, σ controls the radial extent of the gaussian kernel, and larger values of σ represent larger local areas of influence.
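As a sketch, the kernel above can be generated numerically; the function name is an illustration, while the 3×3 size and σ = 0.670 are the values used in the embodiment later in this document:

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Sample G(x, y) = exp(-(x^2 + y^2) / (2 sigma^2)) / (2 pi sigma^2) on a
    size x size grid centred at (0, 0), then normalize so the weights sum to 1."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()

# The 3x3, sigma = 0.670 kernel used in the embodiment below.
kernel = gaussian_kernel(3, 0.670)
```

Normalizing the sampled kernel keeps the filtered image's overall brightness unchanged.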
1.3 Because the background of a glass sample image is simple, the target region and the background region can be obtained by segmentation after binarization preprocessing; the target region of the sample is denoted R.
1.4 Because the glass translates and rotates during loading and unloading, features with rotation-translation invariance are needed to recognize the category of the target region. From the target region R of each sample-set image, 7 such region features are extracted: the region Area, the region contour perimeter Length, the Rectangularity, the width W of the minimum circumscribed rectangle, the height H of the minimum circumscribed rectangle, and 2 central second moments of the image, (u₁₀, u₀₁);
Let the feature vector F = [Area, Length, Rectangularity, W, H, u₁₀, u₀₁]; each element of the feature vector is computed as follows:
1) \mathrm{Area} = m_{00} = \sum_{x=1}^{M}\sum_{y=1}^{N} I(x, y)
where m₀₀ is the 0th-order moment of the image, I(x, y) is the intensity value at image point (x, y), and M and N denote the height and width of the image, respectively;
2) \mathrm{Length} = \sum_{i=1}^{s} \lVert \mathrm{point}_{i+1} - \mathrm{point}_{i} \rVert, \quad \mathrm{point}_{s+1} = \mathrm{point}_{1}
where point is the set of contour points on region R and s is the number of contour points of the region;
3) \mathrm{Rectangularity} = \frac{\mathrm{Area}}{\mathrm{MSR}(R)}
where MSR(R) denotes the area of the minimum circumscribed rectangle of region R;
4) w is the width of the minimum circumscribed rectangle of the region R;
5) h is the height of the minimum circumscribed rectangle of the region R;
6) the central second moments (u₁₀, u₀₁) of the image are computed as:
u_{10} = \sum_{x=1}^{M}\sum_{y=1}^{N} (x - \bar{x})\, I(x, y)
u_{01} = \sum_{x=1}^{M}\sum_{y=1}^{N} (y - \bar{y})\, I(x, y)
where (\bar{x}, \bar{y}) denotes the centroid of the image, i.e.
\bar{x} = m_{10}/m_{00}, \quad \bar{y} = m_{01}/m_{00}
and m₁₀, m₀₁ denote the first-order moments of the image in x and y respectively, i.e.
m_{10} = \sum_{x=1}^{M}\sum_{y=1}^{N} x\, I(x, y), \quad m_{01} = \sum_{x=1}^{M}\sum_{y=1}^{N} y\, I(x, y)
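A minimal NumPy sketch of these moment computations on a binary mask (assumptions: the mask encodes I(x, y) ∈ {0, 1}; an axis-aligned bounding box stands in for the minimum circumscribed rectangle; the contour perimeter is omitted; `central_moment` generalizes the u_pq formula above, and note that first-order central moments vanish by construction, so practical feature sets typically use second-order indices):

```python
import numpy as np

def moments(mask: np.ndarray):
    """Raw moments m00, m10, m01 and the centroid of a binary region mask."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))                     # Area: number of region pixels
    m10, m01 = float(xs.sum()), float(ys.sum())
    return m00, m10, m01, (m10 / m00, m01 / m00)

def central_moment(mask: np.ndarray, p: int, q: int) -> float:
    """u_pq = sum over the region of (x - xbar)^p * (y - ybar)^q."""
    ys, xs = np.nonzero(mask)
    _, _, _, (xbar, ybar) = moments(mask)
    return float(((xs - xbar) ** p * (ys - ybar) ** q).sum())

def region_features(mask: np.ndarray):
    """[Area, W, H, Rectangularity], with an axis-aligned box standing in
    for the minimum circumscribed rectangle of the patent."""
    ys, xs = np.nonzero(mask)
    area = float(len(xs))
    w = int(xs.max() - xs.min() + 1)
    h = int(ys.max() - ys.min() + 1)
    return area, w, h, area / (w * h)
```

For an axis-aligned rectangular region the Rectangularity computed this way is exactly 1, which matches its role as a shape descriptor.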
1.5 Train an MLP (Multi-Layer Perceptron) classifier from the features and sample labels.
An MLP is a multi-layer perceptron network. Suppose the network has n layers: the first layer takes the input features, with l₁ neural units; the middle layers are hidden layers, n − 2 in total, whose node counts l_i are adjusted according to the input layer, the output layer, and the samples; the last layer is the output layer, whose node count l_n equals the number of categories to classify. The MLP learns by the back-propagation algorithm and uses the nonlinear, differentiable Sigmoid as the neuron activation function, i.e.
f(z) = \frac{1}{1 + e^{-z}}
where z is the input to the activation function;
the mapping from the last hidden layer to the output layer is a multi-class regression problem, so the output layer uses the softmax function, i.e.
y_k = \frac{e^{z_k}}{\sum_{j=1}^{l_n} e^{z_j}}
where z_k is the output value of the kth node of layer n − 1 and y_k is the kth node of the output layer;
The trained classifier parameters are saved and only need to be loaded for later recognition; if a new variety is added to the sample set, retraining is required.
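The forward pass of such a network can be sketched as follows; the 7 → 5 → 3 layer sizes follow the embodiment later in this document, while the random weights are placeholders for parameters that back-propagation would learn:

```python
import numpy as np

def sigmoid(z):
    """Nonlinear, differentiable activation f(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    """Output-layer normalization; shifted by max(z) for numerical stability."""
    e = np.exp(z - z.max())
    return e / e.sum()

def mlp_forward(x, layers):
    """Sigmoid hidden layers, softmax output layer, as described above."""
    a = x
    for W, b in layers[:-1]:
        a = sigmoid(W @ a + b)
    W, b = layers[-1]
    return softmax(W @ a + b)

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(5, 7)), np.zeros(5)),   # l1 = 7 features -> l2 = 5 hidden units
          (rng.normal(size=(3, 5)), np.zeros(3))]   # l2 = 5 -> l3 = 3 glass classes
probs = mlp_forward(rng.normal(size=7), layers)     # class probabilities
```

The softmax output is a probability vector over the l_n classes, so the predicted class is simply its argmax.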
Step 2, identifying workpiece category
2.1 Use the camera module to acquire an image, and apply the operation of step 1.2 to the acquired image to smooth noise.
2.2 Segment the image and obtain the target region using the preprocessing method of step 1.3.
2.3 Compute the feature vector of the target region obtained in step 2.2 via step 1.4, feed it to the input layer, and recognize it with the classifier C trained in step 1.5, which outputs the class L to which the current material belongs.
Step 3, positioning the workpiece
Load the standard template of the recognized class L and position the current workpiece, i.e. acquire its pose, in the following steps:
3.1 moment features for initial positioning
Select a standard image from the Num-group data of a given class L as the positioning template, and compute the centroid (x̄₀, ȳ₀) of the standard template's target region and its PCA (Principal Component Analysis) principal component α₀. Then compute the centroid (x̄₁, ȳ₁) and PCA principal component α₁ of the current workpiece's target region, and compute the translation parameters (t_x, t_y) and rotation angle θ from the change of the centroid and of the PCA principal component, i.e.
t_x = x̄₁ − x̄₀
t_y = ȳ₁ − ȳ₀
θ = α₁ − α₀
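Under the assumption that the region pixels are available as 2-D point sets, step 3.1 can be sketched like this; the principal-component angle is taken from the leading eigenvector of the point covariance, and the principal axis's inherent 180° ambiguity is ignored here:

```python
import numpy as np

def centroid_and_pca_angle(pts: np.ndarray):
    """Centroid (xbar, ybar) and the angle of the PCA principal axis
    of an (n, 2) point set."""
    c = pts.mean(axis=0)
    d = pts - c
    cov = d.T @ d / len(pts)
    vals, vecs = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
    v = vecs[:, np.argmax(vals)]          # principal component direction
    return c, np.arctan2(v[1], v[0])

def initial_pose(template_pts, work_pts):
    """t = centroid shift, theta = change of principal-axis angle (step 3.1)."""
    c0, a0 = centroid_and_pca_angle(template_pts)
    c1, a1 = centroid_and_pca_angle(work_pts)
    return c1[0] - c0[0], c1[1] - c0[1], a1 - a0
```

Because both the centroid and the covariance rotate exactly with the point set, this coarse pose is exact for a rigidly moved region, up to the principal-axis sign ambiguity.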
3.2 higher precision positioning optimization
The pose accuracy obtained above meets most field requirements; however, high-precision production equipment generally demands higher pose accuracy. Therefore, on the basis of the pose computed in 3.1, a matching error function d(a) is designed and its minimum is solved by the least-squares method, yielding a higher-precision pose a(θ, t_x, t_y) of the current workpiece relative to the standard template image, i.e.
d(a) = \sum_{i} \left[\left(x_i' - (x_i\cos\theta - y_i\sin\theta + t_x)\right)^2 + \left(y_i' - (x_i\sin\theta + y_i\cos\theta + t_y)\right)^2\right]
where a denotes the pose parameters, comprising the rotation angle θ and the translation (t_x, t_y), written a(θ, t_x, t_y); (x_i, y_i) and (x_i', y_i') are the coordinates of the initial matching point pairs between the standard template image and the current workpiece according to the pose of step 3.1; and d(a) is the matching error at pose a(θ, t_x, t_y).
Minimizing the matching error, i.e. solving min{d(a)}, yields the higher-precision pose a₀(θ, t_x, t_y).
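The text specifies only that d(a) is minimized by least squares, not the solver; one concrete way, sketched here under the assumption of an iterative Gauss-Newton solver over matched point pairs (function name and iteration count are illustrative):

```python
import numpy as np

def refine_pose(src, dst, theta, tx, ty, iters=20):
    """Gauss-Newton minimization of
    d(a) = sum_i || R(theta) p_i + (tx, ty) - p_i' ||^2 over a = (theta, tx, ty),
    starting from the coarse pose of step 3.1.  src, dst: (n, 2) matched points."""
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        r = (src @ R.T + np.array([tx, ty]) - dst).ravel()  # residuals [x0,y0,x1,y1,...]
        dR = np.array([[-s, -c], [c, -s]])                  # dR/dtheta
        J = np.zeros((2 * len(src), 3))
        J[:, 0] = (src @ dR.T).ravel()                      # d residual / d theta
        J[0::2, 1] = 1.0                                    # d x-residuals / d tx
        J[1::2, 2] = 1.0                                    # d y-residuals / d ty
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta, tx, ty = theta + step[0], tx + step[1], ty + step[2]
    return theta, tx, ty
```

On clean correspondences this converges from a nearby coarse pose in a handful of iterations, which is consistent with the sub-pixel refinement reported in the embodiment.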
The method of the invention realizes intelligent classification of automobile glass categories with an MLP classifier, performs initial template positioning by combining image-moment features with PCA, and obtains a higher-precision pose by least-squares optimization. The algorithm complexity is O(C), whereas the time complexity of the traditional template-positioning algorithm is as high as O(n⁴); compared with the prior art, the method greatly improves timeliness, realizes efficient positioning for loading and unloading of multi-class materials, reduces manual intervention, and lowers production cost.
Drawings
FIG. 1 is a classifier training flow diagram.
Fig. 2 is a diagram of an MLP training network architecture.
FIG. 3 is a flow chart of the method for automatic loading and unloading of various automobile glass by the industrial robot.
FIG. 4 is a schematic view of three different types of automotive glass according to an embodiment of the present invention.
Detailed Description
The process of the present invention will be described in further detail below with reference to examples and the accompanying drawings.
Taking the loading and unloading of 3 different glasses on a certain production line as an example, a specific implementation of the scheme is illustrated in FIG. 4.
Step 1, training classifier
1.1 There are 3 different glasses, so 3 categories L are assigned; for each category, Num = 100 images Image are selected as training samples.
1.2 Apply Gaussian filtering to each Image with a kernel size of 3 × 3 and σ = 0.670 to obtain the filtered Image₁.
1.3 Binarize the Image₁ obtained in step 1.2 to separate the target region R from the background.
1.4 For the target region R extracted in 1.3, extract the 7 region features with rotation-translation invariance, namely the region Area, the region contour perimeter Length, the Rectangularity, the width W of the minimum circumscribed rectangle, the height H of the minimum circumscribed rectangle, and the 2 central second moments of the image, (u₁₀, u₀₁);
Let the feature vector F = [Area, Length, Rectangularity, W, H, u₁₀, u₀₁]; the region feature vectors F_i (i = 1, 2, 3) computed for one sample of each of the 3 classes are
F1=[240158.0,3217.88,648.725,258.812,0.330108,0.0422017,-1.27691e-005]
F2=[108871.0,2680.47,481.111,270.077,0.148523,0.187911,2.30686e-005]
F3=[244534.0,2566.42,415.659,334.128,0.415405,0.026618,-2.46742e-006]。
1.5 Train the MLP classifier from the features and sample labels, and design the network structure; since the number of output classes is small, the number of hidden layers is reduced appropriately to simplify the model. The first layer of the network takes the input feature F, with l₁ = 7 neural units, i.e. the dimension of F; the second layer is the hidden layer, whose number of neural units is set to l₂ = 5 from experience; the third layer is the output layer, with l₃ = 3 neural units, one per output category. The MLP learns by the back-propagation algorithm with the nonlinear, differentiable Sigmoid as the neuron activation function, finally yielding the classifier C.
2. Identifying workpiece classes
2.1 Acquire the image to be processed, obtaining image I, and apply the Gaussian filtering of step 1.2 with identical parameters to obtain the filtered image I₁.
2.2 Binarize image I₁ by the method of step 1.3 to obtain the target region R₁ of the image.
2.3 For the target region R₁, compute the 7-dimensional feature vector via step 1.4:
F = [262720.0, 4122.65, 631.169, 303.851, 0.345984, 0.0334388, -3.59867e-006].
Feeding F to the input layer of the classifier C obtained in step 1.5, the classifier outputs the class L to which the current material belongs; the class-recognition success rate reaches 99.9%.
3. Workpiece positioning
And loading a standard template for identifying the type L, and positioning the current workpiece to acquire the pose of the workpiece. The specific steps of workpiece positioning are as follows:
step 3.1 moment feature for initial positioning
Select a standard image of the class as the positioning template and compute the centroid of the standard template together with its PCA principal component, (676.0, 391.0, −1.7358); to reduce repeated computation, the result is stored per class. Compute the centroid and PCA principal component of the current workpiece's target region, (1244.0, 1501.0, −1.125), and compute the offset (t_x, t_y) and rotation angle θ from the change of the centroid and of the PCA principal component, obtaining the initial-positioning pose (568.0, 1110.0, 0.610°), where
t_x = x̄₁ − x̄₀
t_y = ȳ₁ − ȳ₀
θ = α₁ − α₀.
Step 3.2 higher precision positioning optimization
From the initial-positioning pose obtained in step 3.1, solve the minimum of the error function by the least-squares method, i.e. minimize
d(a) = \sum_{i} \left[\left(x_i' - (x_i\cos\theta - y_i\sin\theta + t_x)\right)^2 + \left(y_i' - (x_i\sin\theta + y_i\cos\theta + t_y)\right)^2\right]
to obtain a more accurate pose. Here a denotes the pose parameters, comprising the rotation angle θ and the translation (t_x, t_y), written a(θ, t_x, t_y); (x_i, y_i) and (x_i', y_i') respectively denote the coordinates of the initial matching point pairs obtained from the edge points of the standard template image and the current workpiece according to the pose of step 3.1; and d(a) denotes the matching error at pose a(θ, t_x, t_y). Solving for the pose a₀(θ, t_x, t_y) that minimizes the matching error d(a) achieves sub-pixel accuracy: (565.695, 1112.732, 0.601°).

Claims (1)

1. A method for automatic loading and unloading of various kinds of automobile glass by an industrial robot, comprising the following steps:
step 1, training classifier
Step 1.1, carrying out class marking on the Image of each material, wherein the format is < Image, L >, adding the Image into a training sample, and adding a Num group into each class to form a sample set;
step 1.2, apply Gaussian filtering to the sample-set Images to smooth noise; the kernel function of the Gaussian filter is:
G(x, y) = \frac{1}{2\pi\sigma^{2}} \exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)
where x and y denote the offset relative to the kernel center and σ controls the radial extent of the Gaussian kernel function: the larger the value of σ, the larger the local range of influence;
step 1.3, because the background of a glass sample image is simple, the target region and the background region can be obtained by segmentation after binarization preprocessing; the target region of the sample is denoted R;
step 1.4, extract features from the target region R of each sample-set image, namely 7 region features with rotation-translation invariance: the region Area, the region contour perimeter Length, the Rectangularity, the width W of the minimum circumscribed rectangle, the height H of the minimum circumscribed rectangle, and 2 central second moments of the image, (u₁₀, u₀₁);
Let the feature vector F = [Area, Length, Rectangularity, W, H, u₁₀, u₀₁]; each element of the feature vector is computed as follows:
1) \mathrm{Area} = m_{00} = \sum_{x=1}^{M}\sum_{y=1}^{N} I(x, y)
where m₀₀ is the 0th-order moment of the image, I(x, y) is the intensity value at image point (x, y), and M and N denote the height and width of the image, respectively;
2) \mathrm{Length} = \sum_{i=1}^{s} \lVert \mathrm{point}_{i+1} - \mathrm{point}_{i} \rVert, \quad \mathrm{point}_{s+1} = \mathrm{point}_{1}
where point is the set of contour points on region R and s is the number of contour points of the region;
3) \mathrm{Rectangularity} = \frac{\mathrm{Area}}{\mathrm{MSR}(R)}
where MSR(R) denotes the area of the minimum circumscribed rectangle of region R;
4) w is the width of the minimum circumscribed rectangle of the region R;
5) h is the height of the minimum circumscribed rectangle of the region R;
6) the central second moments (u₁₀, u₀₁) of the image are computed as:
u_{10} = \sum_{x=1}^{M}\sum_{y=1}^{N} (x - \bar{x})\, I(x, y)
u_{01} = \sum_{x=1}^{M}\sum_{y=1}^{N} (y - \bar{y})\, I(x, y)
where (\bar{x}, \bar{y}) denotes the centroid of the image, i.e.
\bar{x} = m_{10}/m_{00}, \quad \bar{y} = m_{01}/m_{00}
and m₁₀, m₀₁ denote the first-order moments of the image in x and y respectively, i.e.
m_{10} = \sum_{x=1}^{M}\sum_{y=1}^{N} x\, I(x, y), \quad m_{01} = \sum_{x=1}^{M}\sum_{y=1}^{N} y\, I(x, y);
Step 1.5 training MLP (Multi layer Perceptron) classifier according to features and sample labels
An MLP is a multi-layer perceptron network. Suppose the network has n layers: the first layer takes the input features, with l₁ neural units in that layer; the middle layers are hidden layers, n − 2 in total, whose node counts l_i are adjusted according to the input layer, the output layer, and the samples; the last layer is the output layer, whose node count l_n equals the number of categories to classify. The MLP learns by the back-propagation algorithm and uses the nonlinear, differentiable Sigmoid as the neuron activation function, i.e.
f(z) = \frac{1}{1 + e^{-z}}
where z is the input to the activation function;
the mapping from the last hidden layer to the output layer uses the softmax function for its output, i.e.
y_k = \frac{e^{z_k}}{\sum_{j=1}^{l_n} e^{z_j}}
where z_k is the output value of the kth node of layer n − 1 and y_k is the kth node of the output layer;
relevant parameters obtained by the classifier are saved, and only loading is needed for later identification; if a new variety is added into the sample set, retraining is needed;
step 2, identifying workpiece category
Step 2.1, acquiring an image by using a camera module, and performing the operation of the step 1.2 on the acquired image to smooth noise;
step 2.2, segmenting the image, and obtaining a target area by adopting the preprocessing method in the step 1.3;
step 2.3, calculating the feature vector of the target area by adopting the step 1.4, using the feature vector as an input layer, and identifying by using the classifier C trained in the step 1.5, so as to output the class L to which the current material belongs;
step 3, positioning the workpiece
Loading a standard template for identifying the type L, positioning the current workpiece, and acquiring the pose of the workpiece, wherein the method specifically comprises the following steps:
step 3.1 moment feature for initial positioning
Select a standard image from the Num-group data of a given class L as the positioning template and compute the centroid (x̄₀, ȳ₀) of the standard template's target region and its PCA (Principal Component Analysis) principal component α₀; then compute the centroid (x̄₁, ȳ₁) and PCA principal component α₁ of the current workpiece's target region. The translation parameters (t_x, t_y) and rotation angle θ can be computed from the change of the centroid and of the PCA principal component, i.e.
t_x = x̄₁ − x̄₀
t_y = ȳ₁ − ȳ₀
θ = α₁ − α₀
Step 3.2 higher precision positioning optimization
Design a matching error function d(a) and solve the minimum of d(a) by the least-squares method to obtain a higher-precision pose:
d(a) = \sum_{i} \left[\left(x_i' - (x_i\cos\theta - y_i\sin\theta + t_x)\right)^2 + \left(y_i' - (x_i\sin\theta + y_i\cos\theta + t_y)\right)^2\right]
where a denotes the pose parameters, comprising the rotation angle θ and the translation (t_x, t_y), written a(θ, t_x, t_y); (x_i, y_i) and (x'_i, y'_i) respectively denote the coordinates of the initial matching point pairs between the standard template image and the current workpiece according to the pose of step 3.1; and d(a) denotes the matching error at pose a(θ, t_x, t_y).
Minimizing the matching error, i.e. solving min{d(a)}, yields the higher-precision pose a₀(θ, t_x, t_y).
CN201911403775.2A 2019-12-31 2019-12-31 Method for automatically feeding and discharging various kinds of automobile glass by industrial robot Active CN111145258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911403775.2A CN111145258B (en) 2019-12-31 2019-12-31 Method for automatically feeding and discharging various kinds of automobile glass by industrial robot


Publications (2)

Publication Number Publication Date
CN111145258A (en) 2020-05-12
CN111145258B CN111145258B (en) 2023-06-02

Family

ID=70522386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911403775.2A Active CN111145258B (en) 2019-12-31 2019-12-31 Method for automatically feeding and discharging various kinds of automobile glass by industrial robot

Country Status (1)

Country Link
CN (1) CN111145258B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651825A (en) * 2015-11-03 2017-05-10 中国科学院沈阳计算技术研究所有限公司 Workpiece positioning and identification method based on image segmentation
US20190095739A1 (en) * 2017-09-27 2019-03-28 Harbin Institute Of Technology Adaptive Auto Meter Detection Method based on Character Segmentation and Cascade Classifier
WO2019127306A1 (en) * 2017-12-29 2019-07-04 Beijing Airlango Technology Co., Ltd. Template-based image acquisition using a robot
CN110428464A (en) * 2019-06-24 2019-11-08 浙江大学 Multi-class out-of-order workpiece robot based on deep learning grabs position and orientation estimation method
CN110570393A (en) * 2019-07-31 2019-12-13 华南理工大学 mobile phone glass cover plate window area defect detection method based on machine vision


Also Published As

Publication number Publication date
CN111145258B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CN110314854B (en) Workpiece detecting and sorting device and method based on visual robot
CN108918536B (en) Tire mold surface character defect detection method, device, equipment and storage medium
CN108960245B (en) Tire mold character detection and recognition method, device, equipment and storage medium
CN108345911B (en) Steel plate surface defect detection method based on convolutional neural network multi-stage characteristics
CN112037219B (en) Metal surface defect detection method based on two-stage convolutional neural network
CN108074231B (en) Magnetic sheet surface defect detection method based on convolutional neural network
Guan et al. A steel surface defect recognition algorithm based on improved deep learning network model using feature visualization and quality evaluation
CN110806736A (en) Method for detecting quality information of forge pieces of die forging forming intelligent manufacturing production line
CN112232399B (en) Automobile seat defect detection method based on multi-feature fusion machine learning
CN111062915A (en) Real-time steel pipe defect detection method based on improved YOLOv3 model
CN109308489B (en) Method for detecting welding quality of element arc welding
CN109344825A (en) A kind of licence plate recognition method based on convolutional neural networks
CN110929713B (en) Steel seal character recognition method based on BP neural network
CN107328787A (en) A kind of metal plate and belt surface defects detection system based on depth convolutional neural networks
CN111798419A (en) Metal paint spraying surface defect detection method
CN109145964B (en) Method and system for realizing image color clustering
CN112497219B (en) Columnar workpiece classifying and positioning method based on target detection and machine vision
Daood et al. Sequential recognition of pollen grain Z-stacks by combining CNN and RNN
CN113393426A (en) Method for detecting surface defects of rolled steel plate
CN115272204A (en) Bearing surface scratch detection method based on machine vision
CN112232263A (en) Tomato identification method based on deep learning
CN115830359A (en) Workpiece identification and counting method based on target detection and template matching in complex scene
CN114820471A (en) Visual inspection method for surface defects of intelligent manufacturing microscopic structure
CN116071560A (en) Fruit identification method based on convolutional neural network
CN114994051A (en) Intelligent integrated real-time detection system for punching of automobile numerical control forged part

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant