CN111145258B - Method for automatically feeding and discharging various kinds of automobile glass by industrial robot - Google Patents

Method for automatically feeding and discharging various kinds of automobile glass by industrial robot

Info

Publication number
CN111145258B
CN111145258B (application CN201911403775.2A)
Authority
CN
China
Prior art keywords
image
layer
pose
region
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911403775.2A
Other languages
Chinese (zh)
Other versions
CN111145258A (en)
Inventor
粟华
尹章芹
史婷
张冶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Estun Robotics Co Ltd
Original Assignee
Nanjing Estun Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Estun Robotics Co Ltd filed Critical Nanjing Estun Robotics Co Ltd
Priority to CN201911403775.2A priority Critical patent/CN111145258B/en
Publication of CN111145258A publication Critical patent/CN111145258A/en
Application granted granted Critical
Publication of CN111145258B publication Critical patent/CN111145258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; Machine component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/06 Recognition of objects for industrial automation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatic feeding and discharging of various kinds of automobile glass by an industrial robot. The method classifies automobile glass types intelligently with an MLP classifier, performs initial template positioning by combining image moment features with PCA principal components, and refines the result by least-squares optimization to obtain a higher-precision pose. The algorithm complexity is O(C); compared with the traditional template-positioning algorithm, whose time complexity is as high as O(n⁴), the method greatly improves timeliness, achieves efficient positioning for feeding and discharging of multiple material classes, reduces manual intervention, and lowers production cost.

Description

Method for automatically feeding and discharging various kinds of automobile glass by industrial robot
Technical Field
The invention relates to an automatic control method for a robot, and in particular to a method for an industrial robot to automatically feed and discharge various kinds of automobile glass.
Background
With the continuous progress of robot technology, more and more feeding and discharging work in industrial production is completed by robots. A camera collects images; through image preprocessing, recognition, positioning, and calibration, the coordinates of the workpiece in the robot coordinate system are obtained and sent to the robot over serial communication, and the robot completes grasping and placing, realizing automated feeding and discharging.
In the 3C industry, each station handles a single workpiece type, and the pose of the workpiece can be obtained by template matching alone. On an automobile manufacturing line, however, the same line may carry very many glass types, even dozens of kinds; if each incoming piece were identified and located by cycling through template matching, the computation would become very large and extremely time-consuming.
Two solutions exist for this problem. In the first, the automobile glass is manually divided into several or dozens of broad categories during loading; template matching sequentially computes the similarity between the incoming material and each template in every search space, and the category and pose of the incoming material are confirmed from the highest similarity and reported to the robot. In the second, a bar code or two-dimensional code is added to the glass; scanning the code yields the incoming material's information and thereby its type, after which template matching confirms its pose. This identification and positioning can be relatively quick, yet it requires bar codes to be attached for matching.
Chinese patent application CN103464383A discloses an industrial robot sorting system and method, in which a camera feeds an image sequence of geometric workpieces to be sorted on a placing table into a PC; visual processing software analyzes the sequence frame by frame, automatically recognizes workpieces with regular shapes such as circles, rectangles, triangles, and hexagons, calculates their relevant features, and then starts sorting. It can quickly identify workpieces of several regular shapes; however, many automobile glass shapes cannot be described by such regular shapes, so the method is not suitable for sorting irregular automobile glass.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method for an industrial robot to automatically load and unload various kinds of automobile glass. A classifier trained by machine learning replaces the previous repeated matching to determine the type; pose calculation is then performed from the image moment features and PCA (Principal Component Analysis) principal components; finally, a higher-precision pose is solved by minimizing an error function.
The method realizes the automatic feeding and discharging of various kinds of automobile glass of the industrial robot, and is realized by the following technical scheme:
step 1, training a classifier
1.1 Label the image of each material with its class L, in the format <Image, L>; add the images to the training samples, with Num groups added per class, to form a sample set.
1.2 Perform Gaussian filtering on the sample-set images to smooth noise. The kernel function of the Gaussian filtering is

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

where x, y denote the offset from the kernel center and σ controls the radial extent of the Gaussian kernel: the larger the σ value, the larger the local influence range.
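As an illustrative sketch (not part of the patent text), the kernel above can be sampled and normalized in a few lines of NumPy; the normalization to unit sum is standard smoothing practice and is an assumption here:

```python
import numpy as np

def gaussian_kernel(ksize=3, sigma=0.670):
    """Sample G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2)
    on a ksize x ksize grid centered at the origin, then normalize so the
    weights sum to 1 (standard for a smoothing filter)."""
    r = ksize // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()
```

Convolving a sample image with this kernel smooths noise before segmentation; as the text notes, a larger σ widens the local influence range.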
1.3 Because the background of the sample glass images is simple, the target region and the background region can be separated by binarization preprocessing; denote the target region of a sample by R.
1.4 Because the glass can translate and rotate during feeding and discharging, 7 region features with rotation-translation invariance are extracted from the target region R of each sample-set image in order to identify its type: region Area, region contour perimeter Length, Rectangularity, width W of the minimum circumscribed rectangle, height H of the minimum circumscribed rectangle, and the central moments (u10, u01).

Denote the feature vector F = [Area, Length, Rectangularity, W, H, u10, u01]. The feature vector elements are calculated as follows:
1) Area = m00 = Σ(x=1..M) Σ(y=1..N) I(x, y)

where m00 is the 0th-order moment of the image, I(x, y) is the brightness value at image point (x, y), and M and N denote the image height and width respectively;
2) Length = Σ(i=1..s) ‖point(i) − point(i+1)‖, with point(s+1) = point(1)

where point is the set of contour points of region R and s is the number of contour points of the region;
3) Rectangularity = Area(R) / MSR(R)

where MSR(R) denotes the area of the minimum circumscribed rectangle of region R;
4) W is the width of the minimum circumscribed rectangle of the region R;
5) H is the height of the minimum circumscribed rectangle of the region R;
6) The central moments (u10, u01) are calculated as

u10 = Σx Σy (x − x̄) · I(x, y)
u01 = Σx Σy (y − ȳ) · I(x, y)

where (x̄, ȳ) is the image centroid, i.e. x̄ = m10/m00, ȳ = m01/m00, and m10, m01 are the first-order moments of the image in x and y respectively, i.e.

m10 = Σx Σy x · I(x, y)
m01 = Σx Σy y · I(x, y)
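For illustration only, the moment-based elements of F can be computed as below from a binary region image standing in for I(x, y). The contour perimeter Length and the minimum circumscribed rectangle require a contour tracer (e.g. OpenCV's), so this dependency-free sketch substitutes the axis-aligned bounding box, which is an assumption, not the patent's exact computation:

```python
import numpy as np

def moment_features(binary):
    """Moment-based pieces of F for a binary region image
    (1 inside the target region R, 0 outside)."""
    ys, xs = np.nonzero(binary)
    I = binary[ys, xs].astype(float)       # brightness values I(x, y)
    m00 = I.sum()                          # Area = 0th-order moment
    m10 = (xs * I).sum()                   # first-order moments
    m01 = (ys * I).sum()
    xbar, ybar = m10 / m00, m01 / m00      # centroid
    u10 = ((xs - xbar) * I).sum()          # central moments (u10, u01)
    u01 = ((ys - ybar) * I).sum()
    W = xs.max() - xs.min() + 1            # axis-aligned bounding box as a
    H = ys.max() - ys.min() + 1            # stand-in for the minimum rectangle
    rectangularity = m00 / (W * H)
    return {"Area": m00, "Rectangularity": rectangularity,
            "W": W, "H": H, "u10": u10, "u01": u01}
```

Note that u10 and u01 are analytically zero under the definition above, which is consistent with the near-zero e−005-scale values reported for them in the embodiment.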
1.5 Train an MLP (Multilayer Perceptron) classifier on the features and sample labels.
The MLP is a multi-layer perceptron network; assume the number of layers is n. The first layer of the network is the input layer, whose number of neural units l1 is the dimension of the feature vector F; the middle layers are hidden layers, n−2 in total, whose node counts li are adjusted according to the input layer, the output layer, and the samples; the last layer is the output layer, whose node count ln is the number of classes. The MLP learns by the back-propagation algorithm, and the nonlinear, differentiable Sigmoid function can be selected as the neuron activation function:

f(z) = 1 / (1 + e^(−z))

where z is the input value of the activation function.

The last hidden layer feeds a multi-class decision, so the output layer adopts the softmax function:

y_k = exp(z_k^(n−1)) / Σ(j=1..ln) exp(z_j^(n−1))

where z_k^(n−1) is the output value of the k-th node of layer n−1 and y_k is the k-th node of the output layer.
relevant parameters obtained by the classifier are saved, and later identification only needs to be loaded; if a new variety is added to the sample set, retraining is required.
Step 2, identifying the workpiece category
2.1 Collect images with the camera module and apply the operation of step 1.2 to the collected images to smooth noise.
2.2 Segment the image and obtain the target region by the preprocessing method of 1.3.
2.3 Compute the feature vector of the target region obtained in step 2.2 as in step 1.4, use it as the input layer, and recognize it with the classifier C trained in step 1.5, which outputs the class L of the current material.
Step 3, workpiece positioning
Load the standard template of the identified class L and position the current workpiece, i.e. acquire the pose of the workpiece. The specific steps are as follows:
3.1 Initial positioning from moment features

From the Num groups of data of class L, select a standard image as the positioning template, and compute the centroid (x̄0, ȳ0) of the standard template's target region and its PCA (Principal Component Analysis) principal-axis angle α0. Then compute the centroid (x̄1, ȳ1) and PCA principal-axis angle α1 of the current workpiece's target region. From the centroid and PCA changes, the translation parameters (tx, ty) and rotation angle θ are obtained:

tx = x̄1 − x̄0
ty = ȳ1 − ȳ0
θ = α1 − α0
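Assuming the target region is available as an N×2 array of pixel coordinates, step 3.1 can be sketched as follows; folding θ into [−90°, 90°) handles the 180° ambiguity of a principal axis, a practical detail the text does not address:

```python
import numpy as np

def initial_pose(template_pts, current_pts):
    """Translation from the centroid shift, rotation from the change of the
    first PCA axis of the region's pixel coordinates (N x 2 arrays)."""
    def centroid_and_axis_angle(pts):
        c = pts.mean(axis=0)
        cov = np.cov((pts - c).T)            # 2x2 covariance of coordinates
        w, v = np.linalg.eigh(cov)
        axis = v[:, np.argmax(w)]            # first principal component
        return c, np.arctan2(axis[1], axis[0])
    c0, a0 = centroid_and_axis_angle(template_pts)
    c1, a1 = centroid_and_axis_angle(current_pts)
    tx, ty = c1 - c0
    theta = a1 - a0
    # a principal axis has no direction: fold theta into [-pi/2, pi/2)
    theta = (theta + np.pi / 2) % np.pi - np.pi / 2
    return tx, ty, theta
```

This gives only an initial estimate; step 3.2 refines it by least squares.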
3.2 Higher-precision positioning optimization

The pose obtained above meets most field requirements; however, some high-precision production applications demand higher pose accuracy. Starting from the pose computed in 3.1, a matching error function d(a) is optimized: its minimum is solved by the least-squares method, yielding a higher-precision pose a(θ, tx, ty) of the current workpiece relative to the standard template image:

d(a) = Σ_i [ (x_i′ − (x_i·cos θ − y_i·sin θ + tx))² + (y_i′ − (x_i·sin θ + y_i·cos θ + ty))² ]

where a denotes the pose parameters, comprising the rotation angle θ and the translation (tx, ty), written a(θ, tx, ty); (x_i, y_i) and (x_i′, y_i′) are the coordinates of the initial matching point pairs between the standard template image and the current workpiece under the pose from step 3.1; and d(a) is the matching error at pose a(θ, tx, ty).

Minimizing the matching error by computation, i.e. min{d(a)}, yields the higher-precision pose a0(θ, tx, ty).
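For matched point pairs under a rigid 2-D motion, the least-squares minimum of d(a) has a closed form (the Procrustes/Kabsch solution), so an illustrative sketch need not iterate; an iterative least-squares solver would reach the same minimum:

```python
import numpy as np

def refine_pose(P, Q):
    """Least-squares rigid alignment: find (theta, tx, ty) minimizing
    d(a) = sum_i || R(theta) p_i + t - q_i ||^2 over matched pairs
    (p_i from the template, q_i from the current workpiece), P and Q
    being N x 2 arrays of corresponding points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - cp, Q - cq
    H = Pc.T @ Qc                       # 2x2 cross-covariance
    # optimal rotation angle for the centered point sets
    theta = np.arctan2(H[0, 1] - H[1, 0], H[0, 0] + H[1, 1])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = cq - R @ cp                     # optimal translation
    return theta, t[0], t[1]
```

With noiseless correspondences the closed form recovers the pose exactly; with noisy edge points it returns the pose minimizing d(a).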
The method of the invention classifies automobile glass types intelligently with an MLP classifier, performs initial template positioning by combining image moment features with PCA, and refines the result by least-squares optimization to obtain a higher-precision pose. The algorithm complexity is O(C); compared with the traditional template-positioning algorithm, whose time complexity is as high as O(n⁴), the method greatly improves timeliness, achieves efficient positioning for feeding and discharging of multiple material classes, reduces manual intervention, and lowers production cost.
Drawings
Fig. 1 is a classifier training flow diagram.
Fig. 2 is a structural diagram of an MLP training network.
FIG. 3 is a flow chart of a method for automatically feeding and discharging various kinds of automobile glass by using the industrial robot.
FIG. 4 is a schematic view of three different types of automotive glass exemplified in the examples of the present invention.
Detailed Description
The process according to the invention is described in further detail below with reference to examples and figures.
Taking the feeding and discharging of 3 different kinds of glass on a certain line as an example (see fig. 4), a specific implementation of the scheme is described.
Step 1, training a classifier
1.1 There are 3 different kinds of glass, hence 3 classes L in total; each class selects Num = 100 images as training samples.
1.2 Perform Gaussian filtering on each Image with kernel size 3×3 and σ = 0.670, obtaining the Gaussian-filtered Image1.
1.3 Binarize the Image1 obtained in 1.2 and separate the target region R from the background.
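Steps 1.2 and 1.3 of the embodiment can be sketched dependency-free as below; the mean-based global threshold is an assumption standing in for whatever binarization threshold the line actually uses:

```python
import numpy as np

def smooth_and_binarize(img, sigma=0.670, thresh=None):
    """Apply the embodiment's 3x3 Gaussian (sigma = 0.670) by direct
    convolution, then binarize with a global threshold (image mean by
    default, an illustrative choice)."""
    r = np.arange(-1, 2)
    yy, xx = np.meshgrid(r, r, indexing="ij")
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()                               # normalize kernel weights
    pad = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):                        # 3x3 convolution, unrolled
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    if thresh is None:
        thresh = out.mean()
    return (out > thresh).astype(np.uint8)     # 1 = target region, 0 = background
```

The binary output plays the role of the target region R from which features are then extracted.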
1.4 For the target region R extracted in 1.3, compute the 7 region features with rotation-translation invariance: region Area, region contour perimeter Length, Rectangularity, width W of the minimum bounding rectangle, height H of the minimum bounding rectangle, and the central moments (u10, u01).

Denote the feature vector F = [Area, Length, Rectangularity, W, H, u10, u01]. For one sample of each of the 3 classes of the sample set, the computed feature vectors Fi (i = 1, 2, 3) are

F1 = [240158.0, 3217.88, 648.725, 258.812, 0.330108, 0.0422017, −1.27691e−005]
F2 = [108871.0, 2680.47, 481.111, 270.077, 0.148523, 0.187911, 2.30686e−005]
F3 = [244534.0, 2566.42, 415.659, 334.128, 0.415405, 0.026618, −2.46742e−006].
1.5 Train the MLP classifier on the features and sample labels, and design the network structure; because the number of output classes is small, the number of hidden layers is reduced appropriately to simplify the model. The first layer of the network takes the input feature F, with l1 = 7 neural units (the dimension of F); the second layer is a hidden layer with l2 = 5 units, set from experience; the third layer is the output layer with l3 = 3 units, i.e. 3 output classes. The MLP learns by back-propagation with the nonlinear, differentiable Sigmoid as the neuron activation function, finally yielding the classifier C.
2. Identifying the workpiece category
2.1 Collect an image to obtain the image I to be processed; apply the Gaussian filtering of step 1.2 with the same parameters, obtaining the filtered image I1.
2.2 Binarize I1 and obtain the target region R1 of the image by the method of 1.3 above.
2.3 For the target region R1, compute the 7-dimensional feature vector via step 1.4:

F = [262720.0, 4122.65, 631.169, 303.851, 0.345984, 0.0334388, −3.59867e−006]

Feed the feature vector F to the input layer of the classifier C obtained in step 1.5; the classifier outputs the class L of the current material. The class-recognition success rate reaches 99.9%.
3. Workpiece positioning
Load the standard template of the identified class L and position the current workpiece to acquire its pose. The specific steps of workpiece positioning are as follows:
step 3.1 moment feature initial positioning
Selecting a standard image from the classes as a locating template, calculating the central position of the standard template and the principal component of PCA (676.0,391.0, -1.7358 DEG), wherein the result can be stored in each class in order to reduce repeated calculation; calculating the centroid of the current workpiece target area and the principal component of PCA (1244.0,1501.0, -1.125 DEG), and calculating the offset (t) according to the change of the centroid and the change of the PCA x ,t y ) And the rotation angle theta, the initial positioning pose (568.0,1110.0,0.610 degrees) is obtained. Wherein the method comprises the steps of
Figure BDA0002348078160000061
Figure BDA0002348078160000062
θ=α1-α0。
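As a sanity check (illustration only), the reported initial pose follows directly from the centroid and principal-axis differences:

```python
# Template centroid/axis: (676.0, 391.0, -1.7358 deg);
# current workpiece:      (1244.0, 1501.0, -1.125 deg).
tx = 1244.0 - 676.0            # x-centroid shift
ty = 1501.0 - 391.0            # y-centroid shift
theta = -1.125 - (-1.7358)     # principal-axis change, in degrees
```

This reproduces the reported pose (568.0, 1110.0, 0.610°) up to the rounding used in the text.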
Step 3.2 higher precision positioning optimization
According to the initial pose obtained in step 3.1, the minimum of the error function is solved by the least-squares method:

d(a) = Σ_i [ (x_i′ − (x_i·cos θ − y_i·sin θ + tx))² + (y_i′ − (x_i·sin θ + y_i·cos θ + ty))² ]

yielding a more accurate pose. Here a denotes the pose parameters, comprising the rotation angle θ and the translation (tx, ty), written a(θ, tx, ty); (x_i, y_i) and (x_i′, y_i′) are the coordinates of the initial matching point pairs obtained from the standard template image and the edge points of the current workpiece under the pose from step 3.1; and d(a) is the matching error at pose a(θ, tx, ty). Solving for the pose a0(θ, tx, ty) that minimizes the matching error d(a) gives a0 = (565.695, 1112.732, 0.601°), reaching sub-pixel accuracy.

Claims (1)

1. A method for automatically feeding and discharging various kinds of automobile glass by an industrial robot comprises the following steps:
step 1, training a classifier
Step 1.1, carrying out class marking L on an Image of each material, wherein the formats of the Image are less than Image and L >, adding the Image into a training sample, and adding a Num group into each class to form a sample set;
step 1.2, performing Gaussian filtering on the sample-set images to smooth noise, the kernel function of the Gaussian filtering being

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

where x and y denote the offset relative to the center and σ controls the radial extent of the Gaussian kernel function: the larger the σ value, the larger the local influence range;
step 1.3, because the background of the sample glass images is simple, separating the target region and the background region by binarization preprocessing, the target region of a sample being denoted R;
step 1.4, extracting features from the target region R of the sample-set images, comprising 7 region features with rotation-translation invariance: region Area, region contour perimeter Length, Rectangularity, width W of the minimum bounding rectangle, height H of the minimum bounding rectangle, and the central moments (u10, u01);

denoting the feature vector F = [Area, Length, Rectangularity, W, H, u10, u01], the feature vector elements are calculated as follows:
1) Area = m00 = Σ(x=1..M) Σ(y=1..N) I(x, y)

where m00 is the 0th-order moment of the image, I(x, y) is the brightness value at image point (x, y), and M and N denote the image height and width respectively;
2) Length = Σ(i=1..s) ‖point(i) − point(i+1)‖, with point(s+1) = point(1)

where point is the set of contour points of region R and s is the number of contour points of the region;
3) Rectangularity = Area(R) / MSR(R)

where MSR(R) denotes the area of the minimum circumscribed rectangle of region R;
4) W is the width of the minimum circumscribed rectangle of the region R;
5) H is the height of the minimum circumscribed rectangle of the region R;
6) The central moments (u10, u01) are calculated as

u10 = Σx Σy (x − x̄) · I(x, y)
u01 = Σx Σy (y − ȳ) · I(x, y)

where (x̄, ȳ) is the image centroid, i.e. x̄ = m10/m00, ȳ = m01/m00, and m10, m01 are the first-order moments of the image in x and y respectively, i.e.

m10 = Σx Σy x · I(x, y)
m01 = Σx Σy y · I(x, y);
step 1.5, training an MLP (Multilayer Perceptron) classifier on the features and sample labels,

the MLP being a multi-layer perceptron network; assume the number of layers is n. The first layer of the network is the input layer, whose number of neural units l1 is the dimension of the feature vector F; the middle layers are hidden layers, n−2 in total, whose node counts li are adjusted according to the input layer, the output layer, and the samples; the last layer is the output layer, whose node count ln is the number of classes. The MLP learns by the back-propagation algorithm, adopting the nonlinear, differentiable Sigmoid as the neuron activation function:

f(z) = 1 / (1 + e^(−z))

where z is the input value of the activation function;

the last hidden layer is connected to the output layer, whose output adopts the softmax function:

y_k = exp(z_k^(n−1)) / Σ(j=1..ln) exp(z_j^(n−1))

where z_k^(n−1) is the output value of the k-th node of layer n−1 and y_k is the k-th node of the output layer;
the relevant parameters of the trained classifier are saved, and later identification only needs to load them; if a new variety is added to the sample set, retraining is needed;
step 2, identifying the workpiece category
Step 2.1, collecting images by using a camera module, and performing the operation of the step 1.2 on the collected images to smooth noise;
step 2.2, dividing the image, and obtaining a target area by adopting the preprocessing method of step 1.3;
step 2.3, calculating a feature vector of the target area by adopting the step 1.4, using the feature vector as an input layer, and using the classifier C trained in the step 1.5 for recognition, so as to output the class L to which the current material belongs;
step 3, workpiece positioning
Loading a standard template of an identification class L, positioning a current workpiece, and acquiring the pose of the workpiece, wherein the specific steps are as follows:
step 3.1 moment feature initial positioning
From the Num groups of data of class L, select a standard image as the positioning template, and compute the centroid (x̄0, ȳ0) of the standard template's target region and its PCA (Principal Component Analysis) principal-axis angle α0; then compute the centroid (x̄1, ȳ1) of the current workpiece's target region and its PCA principal-axis angle α1; from the centroid and PCA principal-component changes, obtain the translation parameters (tx, ty) and rotation angle θ:

tx = x̄1 − x̄0
ty = ȳ1 − ȳ0
θ = α1 − α0
Step 3.2 higher precision positioning optimization
Design a matching error function d(a) and solve for its minimum by the least-squares method to obtain the higher-precision pose:

d(a) = Σ_i [ (x_i′ − (x_i·cos θ − y_i·sin θ + tx))² + (y_i′ − (x_i·sin θ + y_i·cos θ + ty))² ]

where a denotes the pose parameters, comprising the rotation angle θ and the translation (tx, ty), written a(θ, tx, ty); (x_i, y_i) and (x_i′, y_i′) are the coordinates of the initial matching point pairs of the standard template image and the current workpiece under the pose from step 3.1; and d(a) is the matching error at pose a(θ, tx, ty);

minimizing the matching error by computation, i.e. min{d(a)}, yields the higher-precision pose a0(θ, tx, ty).
CN201911403775.2A 2019-12-31 2019-12-31 Method for automatically feeding and discharging various kinds of automobile glass by industrial robot Active CN111145258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911403775.2A CN111145258B (en) 2019-12-31 2019-12-31 Method for automatically feeding and discharging various kinds of automobile glass by industrial robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911403775.2A CN111145258B (en) 2019-12-31 2019-12-31 Method for automatically feeding and discharging various kinds of automobile glass by industrial robot

Publications (2)

Publication Number Publication Date
CN111145258A CN111145258A (en) 2020-05-12
CN111145258B true CN111145258B (en) 2023-06-02

Family

ID=70522386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911403775.2A Active CN111145258B (en) 2019-12-31 2019-12-31 Method for automatically feeding and discharging various kinds of automobile glass by industrial robot

Country Status (1)

Country Link
CN (1) CN111145258B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651825A (en) * 2015-11-03 2017-05-10 中国科学院沈阳计算技术研究所有限公司 Workpiece positioning and identification method based on image segmentation
WO2019127306A1 (en) * 2017-12-29 2019-07-04 Beijing Airlango Technology Co., Ltd. Template-based image acquisition using a robot
CN110428464A (en) * 2019-06-24 2019-11-08 浙江大学 Multi-class out-of-order workpiece robot based on deep learning grabs position and orientation estimation method
CN110570393A (en) * 2019-07-31 2019-12-13 华南理工大学 mobile phone glass cover plate window area defect detection method based on machine vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590498B (en) * 2017-09-27 2020-09-01 哈尔滨工业大学 Self-adaptive automobile instrument detection method based on character segmentation cascade two classifiers


Also Published As

Publication number Publication date
CN111145258A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN110314854B (en) Workpiece detecting and sorting device and method based on visual robot
CN111062915B (en) Real-time steel pipe defect detection method based on improved YOLOv3 model
CN106650721B (en) A kind of industrial character identifying method based on convolutional neural networks
CN112037219B (en) Metal surface defect detection method based on two-stage convolutional neural network
Guan et al. A steel surface defect recognition algorithm based on improved deep learning network model using feature visualization and quality evaluation
CN106251353A (en) Weak texture workpiece and the recognition detection method and system of three-dimensional pose thereof
CN110806736A (en) Method for detecting quality information of forge pieces of die forging forming intelligent manufacturing production line
CN110929713B (en) Steel seal character recognition method based on BP neural network
CN112497219B (en) Columnar workpiece classifying and positioning method based on target detection and machine vision
CN108846415A (en) The Target Identification Unit and method of industrial sorting machine people
CN113538486A (en) Method for improving identification and positioning accuracy of automobile sheet metal workpiece
CN107622276B (en) Deep learning training method based on combination of robot simulation and physical sampling
CN112232399A (en) Automobile seat defect detection method based on multi-feature fusion machine learning
CN108876765A (en) The target locating set and method of industrial sorting machine people
CN113393426A (en) Method for detecting surface defects of rolled steel plate
CN116061187B (en) Method for identifying, positioning and grabbing goods on goods shelves by composite robot
CN115830359A (en) Workpiece identification and counting method based on target detection and template matching in complex scene
CN109919154B (en) Intelligent character recognition method and device
CN111523342A (en) Two-dimensional code detection and correction method in complex scene
CN114913346A (en) Intelligent sorting system and method based on product color and shape recognition
CN117085969B (en) Artificial intelligence industrial vision detection method, device, equipment and storage medium
CN111145258B (en) Method for automatically feeding and discharging various kinds of automobile glass by industrial robot
CN117381793A (en) Material intelligent detection visual system based on deep learning
CN111950556A (en) License plate printing quality detection method based on deep learning
CN108986090A (en) A kind of depth convolutional neural networks method applied to the detection of cabinet surface scratch

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant