CN108766894B - A kind of chip attachment method and system of robot vision guidance - Google Patents


Info

Publication number
CN108766894B
CN108766894B
Authority
CN
China
Prior art keywords
chip
image
robot
area
industrial camera
Prior art date
Legal status
Active
Application number
CN201810582133.2A
Other languages
Chinese (zh)
Other versions
CN108766894A
Inventor
王耀南
马聪
贾林
彭伟星
吴昊天
钱珊珊
张煜
田吉委
李娟慧
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Application filed by Hunan University
Priority claimed from CN201810582133.2A
Publication of CN108766894A
Application granted
Publication of CN108766894B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L21/00 Processes or apparatus adapted for the manufacture or treatment of semiconductor or solid state devices or of parts thereof
    • H01L21/02 Manufacture or treatment of semiconductor devices or of parts thereof
    • H01L21/04 Manufacture or treatment of semiconductor devices or of parts thereof the devices having potential barriers, e.g. a PN junction, depletion layer or carrier concentration layer
    • H01L21/50 Assembly of semiconductor devices using processes or apparatus not provided for in a single one of the subgroups H01L21/06 - H01L21/326, e.g. sealing of a cap to a base of a container
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L21/00 Processes or apparatus adapted for the manufacture or treatment of semiconductor or solid state devices or of parts thereof
    • H01L21/67 Apparatus specially adapted for handling semiconductor or electric solid state devices during manufacture or treatment thereof; Apparatus specially adapted for handling wafers during manufacture or treatment of semiconductor or electric solid state devices or components; Apparatus not specifically provided for elsewhere
    • H01L21/67005 Apparatus not specifically provided for elsewhere
    • H01L21/67011 Apparatus for manufacture or treatment
    • H01L21/67126 Apparatus for sealing, encapsulating, glassing, decapsulating or the like

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • General Physics & Mathematics (AREA)
  • Manufacturing & Machinery (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Power Engineering (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a chip mounting method and system guided by robot vision. A second industrial camera is added between the PCB placement area and the chip placement area. The robot sucks up a chip and moves it to the second industrial camera, where an image of the sucked chip is taken. The angle difference between the included angle of the chip mounting area with the horizontal axis and the included angle of the sucked chip with the horizontal axis is computed, and the end effector rotates by this difference at position P3 to perform angle compensation; compensation is also performed on the X and Y axes. This corrects the sliding errors generated while sucking the chip or moving after suction, improving placement accuracy. Meanwhile, the invention uses a six-axis robot, whose greater freedom and flexibility enable chip mounting inside cavity-type workpieces and realize flexible production. In addition, the first industrial camera is mounted on the robot hand and is easy to move, so this position-based visual control method has high flexibility and a wide range of application.

Description

Chip mounting method and system guided by robot vision
Technical Field
The invention belongs to the technical field of robot vision assembly, and particularly relates to a chip mounting method and system guided by robot vision.
Background
China is a major electronics manufacturer. According to data from the National Bureau of Statistics, as of 2017 China had nearly 15,000 electronic manufacturing enterprises above a designated size, the main business revenue of the electronic manufacturing industry was nearly 10 trillion yuan, fixed-asset investment exceeded the one-trillion-yuan mark, and newly added fixed assets have grown by more than 20 percent for many years. In addition, electronic assembly is characterized by small workpiece volumes, many workpiece types, complex assembly processes and high assembly precision requirements, so assembly accounts for about 40-60% of total electronic manufacturing time and about 50-70% of the total workload.
Currently, the main devices for chip mounting are chip mounters such as Samsung's CP45F/FV, the Assembleon Topaz Xi II, and Fuji's QF132E and CP-733E. Although chip mounters are widely applied in the field of chip mounting, they are limited by the low precision of their accompanying vision systems, inaccurate position correction, and an inability to mount inside cavity workpieces, which makes flexible production difficult to realize.
In recent years, assembly robots have developed continuously: flexible wrists and flexible grippers have greatly improved the precision and flexibility of robotic assembly and can complete some work that chip mounters cannot. However, as assembly processes grow more complex and precision requirements rise, traditional assembly robots, lacking perception and adaptive control capabilities, can no longer meet current electronic manufacturing requirements. Machine vision can simulate the human visual function: it identifies objects, extracts target information from images for processing, analysis and understanding, and uses the results for positioning, measurement and the like. Applying machine vision to the assembly robot allows its execution point to be adjusted dynamically according to the actual position of the workpiece, so the robot can position accurately and pick and place intelligently, meeting the assembly requirements. The integration of machine vision and assembly robots also drives electronic assembly toward high precision, high flexibility and high intelligence.
Disclosure of Invention
Aiming at the technical problems of low chip mounting precision, poor flexibility and the like in the prior art, the invention provides a chip mounting method and system guided by robot vision.
The invention solves the technical problems through the following technical scheme: a chip mounting method guided by robot vision comprises the following steps:
step (1): the installation and the photographing position of the camera are determined;
installing a first industrial camera on the hand of the robot, and respectively setting photographing positions P1 and P2 for acquiring clear chip placement area images and PCB placement area images above the chip placement area and the PCB placement area; installing a second industrial camera between the chip placing area and the PCB placing area, and setting a photographing position P3;
step (2): calibrating a camera;
calibrating the first industrial camera and the second industrial camera, acquiring the internal and external parameters of the first industrial camera at P1 and P2 and of the second industrial camera at P3, and determining the relation between the image coordinate system and the world coordinate system;
when determining this relation, first obtain the relation between the image coordinate system and the camera coordinate system and the relation between the camera coordinate system and the world coordinate system, then combine them to obtain the relation between the image coordinate system and the world coordinate system; distortion correction of the camera is completed according to the internal parameters; the internal parameters are related to the characteristics of the camera itself, such as focal length and pixel size; the external parameters are the position, rotation direction and the like of the camera;
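As an illustration of this image-to-world relation (a sketch, not the patent's code), the following numpy example assumes made-up intrinsic matrix A and extrinsic [R, t] values and maps points lying on the Z = 0 mounting plane back and forth through the resulting homography; in practice A, R and t come from the calibration of step (2):

```python
import numpy as np

# Assumed intrinsics: alpha = beta = 800, gamma = 0, principal point (320, 240).
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera looking straight down (assumed)
t = np.array([[0.0], [0.0], [500.0]])  # 500 mm above the plane (assumed)

# For points with Z = 0 the projection reduces to a 3x3 homography
# H = A @ [r1 r2 t], where r1, r2 are the first two columns of R.
H = A @ np.hstack([R[:, :2], t])

def world_to_pixel(Xw, Yw):
    p = H @ np.array([Xw, Yw, 1.0])
    return p[0] / p[2], p[1] / p[2]

def pixel_to_world(u, v):
    w = np.linalg.solve(H, np.array([u, v, 1.0]))
    return w[0] / w[2], w[1] / w[2]

u, v = world_to_pixel(10.0, -5.0)
X, Y = pixel_to_world(u, v)  # round-trips back to (10.0, -5.0)
```

With these assumed values the world origin projects to the principal point, and any planar point round-trips exactly, which is the property the coordinate transformations of steps (5), (7) and (8) rely on.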
and (3): calibrating the hand and the eye of the robot;
calibrating the hands and eyes of the robot to acquire the relation between a camera coordinate system and a robot coordinate system;
the hand refers to a robot, and the eyes refer to a first industrial camera and a second industrial camera;
and (4): acquiring images of the PCB and the chip for the first time;
firstly, completing the first image acquisition of the PCB at a photographing position P2, and then moving to a photographing position P1 to complete the first image acquisition of the chip;
and (5): image processing and coordinate transformation for the first time;
respectively processing the PCB first image and the chip first image to obtain the coordinates of the center point of the chip mounting area in the PCB first image, the included angle between the chip mounting area and the horizontal axis, and the coordinates of the center point of the chip in the chip first image; then, according to the relation between the image coordinate system and the world coordinate system and the relation between the camera coordinate system and the robot coordinate system, calculating the coordinates (x0, y0) of the center point of the chip mounting area and the coordinates (x1, y1) of the center point of the chip in the robot coordinate system, and transmitting (x0, y0) and (x1, y1) to the robot;
the image acquisition at P1 and P2 is completed by the first industrial camera, and the image acquisition at P3 is completed by the second industrial camera;
and (6): acquiring a second image of the chip;
the robot controls its end effector to reach the coordinates (x1, y1), sucks up the chip, and moves it to photographing position P3 to complete the second image acquisition of the chip;
micro-sliding during chip suction would otherwise reduce the precision of subsequent chip mounting; photographing position P3 is therefore set to acquire an image of the sucked chip, in preparation for the subsequent rotation of the robot end effector;
and (7): angle compensation;
processing the chip second image to obtain the coordinates of the center point of the chip in that image and the included angle θ2 between the chip and the horizontal axis; controlling the end effector to rotate by θ2 − θ1 at photographing position P3, where θ1 is the included angle between the chip mounting area and the horizontal axis; after the angle compensation is finished, photographing again to obtain a third image of the rotated chip; processing the chip third image to obtain the coordinates of the center point of the chip in that image, and calculating the coordinates (x2, y2) of that center point in the robot coordinate system according to the relation between the image coordinate system and the world coordinate system and the relation between the camera coordinate system and the robot coordinate system;
The sliding error generated in the process of sucking the chip or moving after sucking is corrected through angle compensation, and the chip mounting precision is improved; when the end effector rotates, the anticlockwise direction is taken as the positive direction;
and (8): completing coordinate compensation and mounting;
the end effector moves x2 − x0 in the X direction and y2 − y0 in the Y direction, then moves vertically downward to the chip mounting area on the PCB to place the chip, completing the mounting; the process then enters the next procedure.
In the chip mounting process, all moving and rotating operations are moving and rotating of a hand, a wrist, an arm and each joint of the robot, and a robot base is fixed; since the chip mounting stage is nearly horizontal, the height coordinate Z can be set to a fixed value when the robot controls the end effector to pick up and mount the chip, and therefore, the coordinates transmitted to the robot only consider the X-axis and the Y-axis.
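As a hypothetical worked example of the compensation in steps (7) and (8) — all pose numbers below are made-up illustration values, not patent data, and Z is held fixed as the text notes:

```python
# Assumed example values for the compensation arithmetic of steps (7)-(8).
theta1 = 12.0           # chip mounting area vs. horizontal axis, degrees
theta2 = 15.5           # sucked chip vs. horizontal axis, degrees
x0, y0 = 250.0, 120.0   # mounting-area center point, robot coordinates
x2, y2 = 40.0, 35.0     # chip center at P3 after the rotation

# Step (7): rotate the end effector by theta2 - theta1 at P3
# (counterclockwise taken as the positive direction).
rotation = theta2 - theta1

# Step (8): move x2 - x0 along X and y2 - y0 along Y, then descend to place.
dx, dy = x2 - x0, y2 - y0
```

Here the effector would rotate 3.5 degrees counterclockwise and then apply the X and Y offsets; the angle value of the chip is not changed during the translation.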
Further, the robot in the step (1) is a six-axis robot, and the first industrial camera is installed at a hand part of the six-axis robot;
the six-axis robot has higher degree of freedom and flexibility, the first industrial camera is arranged on the hand part, has wider photographing visual field and is easy to complete photographing along with the driving of the hand part; and due to the freedom and flexibility of the six-axis robot, chip mounting inside the cavity workpiece is realized, and flexible production is realized.
Further, in the step (1), a midpoint between the PCB placement area and the chip placement area is set as an area center point, the second industrial camera is installed between the PCB placement area and the area center point, and a lens of the second industrial camera faces upward;
the second industrial camera is installed close to the PCB placement area, which shortens the distance between the camera and that area; after angle compensation at P3, the end effector moves from P3 to the PCB placement area over this shortened distance, so the distance over which sliding deviation can occur is also shortened, mitigating the loss of mounting precision caused by sliding deviation during the move.
Further, in the step (1), a light source is respectively installed in the chip placing area, the PCB placing area, and an area between the chip placing area and the PCB placing area, so that an image with a better effect can be obtained.
Further, the camera calibration in the step (2) specifically comprises the following steps:
step (2.1): acquiring images at different angles;
placing the checkerboard calibration board on the plane of the PCB placement area, changing its position, and taking 15-20 pictures from different angles at P2 with the first industrial camera;
step (2.2): obtaining coordinates of the internal angle points;
according to the pictures taken in step (2.1), the image coordinates (u, v) of the inner corner points of the checkerboard calibration board are obtained using the Harris corner detection algorithm, where u and v are the abscissa and ordinate of an inner corner point in the picture; the world coordinates (X, Y, Z) of the inner corner points are obtained from the actual size of the checkerboard designed during manufacture, where X, Y and Z are the coordinates of an inner corner point on the abscissa, ordinate and height axes of the world coordinate system, and S1, S2 represent scale factors;
the origin of the world coordinate system is the corner point at the upper left of the checkerboard calibration board; since the size of each square is fixed at design time, the actual position of each inner corner point can be obtained in sequence from the square size; the corner points detected by the Harris algorithm are the inner corner points (excluding those on the outer border), and the detection algorithm has a small computational load and is insensitive to illumination and rotation;
step (2.3): determining the relation between an image coordinate system and a world coordinate system;
according to the image coordinates and world coordinates of the inner corner points, the internal parameter matrix A and external parameter matrix [R, t] of the first industrial camera are calculated, and the correspondence between the image coordinate system and the world coordinate system is determined as

s·[u, v, 1]ᵀ = A·[R, t]·[X, Y, Z, 1]ᵀ,  with A = [[α, γ, u0], [0, β, v0], [0, 0, 1]]

where s is a scale factor, R is a 3×3 rotation matrix, t is a 3×1 translation vector, (u0, v0) are the principal point coordinates (the principal point is the intersection of the camera optical axis and the image plane), α and β are the normalized focal lengths on the X and Y axes, and γ describes the skew of the image plane; the first column of R corresponds to rotation about the X axis, the second column to rotation about the Y axis, and the third column to rotation about the Z axis; the three elements px, py, pz of t are the translation distances along the X, Y and Z axes;
the 0 and 1 entries in the internal parameter matrix A have no specific physical meaning; they are included only for computational convenience and do not change the result. The image and world coordinates of the inner corner points are input to an OpenCV or MATLAB toolbox to solve for R and t;
step (2.4): calibrating a camera in a chip placement area;
repeating the steps (2.1) to (2.3) at the chip placement area P1 to complete the first industrial camera calibration at the photographing position P1;
step (2.5): calibrating a second industrial camera;
and in the area where the second industrial camera is located, controlling the end effector to suck up the checkerboard calibration plate and bring it to position P3, changing the position of the calibration plate, taking 15-20 pictures at P3 from different angles, and repeating steps (2.2) to (2.3) to finish the calibration of the second industrial camera at P3.
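The toolbox solve mentioned in step (2.3) rests on fitting the correspondence between the corner points' world and image coordinates. A minimal sketch of one ingredient of that fit — a direct linear transform (DLT) homography estimate for the planar board, shown here with synthetic corner data — follows; the full calibration additionally decomposes A, R, t and estimates distortion, which is omitted here:

```python
import numpy as np

def fit_homography(world_xy, image_uv):
    """Direct linear transform: fit H with [u, v, 1]^T ~ H [X, Y, 1]^T."""
    rows = []
    for (X, Y), (u, v) in zip(world_xy, image_uv):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic inner-corner grid (30 mm squares, an assumed size) imaged under a
# known homography, used to check that the fit recovers that homography.
H_true = np.array([[2.0, 0.1, 100.0], [0.05, 1.8, 50.0], [1e-4, 2e-4, 1.0]])
world = [(30.0 * i, 30.0 * j) for i in range(4) for j in range(3)]
image = []
for X, Y in world:
    p = H_true @ np.array([X, Y, 1.0])
    image.append((p[0] / p[2], p[1] / p[2]))
H_est = fit_homography(world, image)
```

On this noise-free synthetic grid the estimated homography matches the true one; with real photographs the toolbox performs the same fit over many corner points per view and refines it nonlinearly.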
Further, the specific steps of calibrating the hands and eyes of the robot in the step (3) are as follows:
step (3.1): calibrating the hand and eye of the PCB placement area;
selecting three positions A1, B1 and C1 that are not on the same straight line on the plane of the PCB placement area, controlling the robot end effector to reach the three positions in turn, completing calibration of the first industrial camera at each, and obtaining three external parameter matrices [R1, t1], [R2, t2], [R3, t3]; obtaining from the robot teach pendant the description matrices [R4, t4], [R5, t5], [R6, t6] corresponding to positions A1, B1 and C1 in the robot coordinate system;
the description matrices are displayed directly on the robot teach pendant and describe the position and posture of the robot at positions A1, B1 and C1;
wherein [Rc1, tc1] = [R1, t1]·[R2, t2]⁻¹, representing the transformation matrix between the camera coordinate systems at positions A1 and B1; [Rc2, tc2] = [R2, t2]·[R3, t3]⁻¹, representing the transformation matrix between the camera coordinate systems at positions B1 and C1;
[Re1, te1] = [R4, t4]·[R5, t5]⁻¹, representing the transformation matrix between the robot coordinate systems at positions A1 and B1; [Re2, te2] = [R5, t5]·[R6, t6]⁻¹, representing the transformation matrix between the robot coordinate systems at positions B1 and C1;
step (3.2): establishing a relational expression;
establishing the following relational expressions from the three external parameter matrices and the description matrices obtained in step (3.1):

[Rc1, tc1]·[Rx, tx] = [Rx, tx]·[Re1, te1],  [Rc2, tc2]·[Rx, tx] = [Rx, tx]·[Re2, te2]

and solving them for the relation matrix [Rx, tx] between the industrial camera and the end effector, where Rx and tx are respectively the rotation matrix and translation vector converted from the camera coordinate system to the robot coordinate system;
the camera coordinate system takes a camera optical center as an origin, and the robot coordinate system takes a robot base as an origin;
step (3.3): calibrating the hand and eye of the chip placement area;
repeating the steps (3.1) and (3.2) in the chip placement area to finish the hand-eye calibration of the chip placement area;
step (3.4): calibrating the hand and eye of a second industrial camera area;
in the area of the second industrial camera, holding up the checkerboard calibration plate and repeating steps (3.1) and (3.2) to finish the hand-eye calibration of this area;
the calibration plate is raised to the focusing height of the camera and positioned in the middle of the field of view of the second industrial camera; the second industrial camera area refers to the area that the second industrial camera can capture.
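The transformation-matrix bookkeeping of step (3.1) can be sketched with 4×4 homogeneous matrices. The example below uses made-up poses: it chooses a camera-to-effector transform X and two robot motions, constructs the camera motions they induce, and confirms they satisfy the classic AX = XB form underlying the step (3.2) relational expression:

```python
import numpy as np

def T(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation and a translation."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def rotz(deg):
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Assumed example poses (illustration only):
X = T(rotz(10), [5.0, -3.0, 20.0])     # camera-to-effector transform
B1 = T(rotz(25), [100.0, 0.0, 0.0])    # plays the role of [Re1, te1]
B2 = T(rotz(-40), [0.0, 80.0, 10.0])   # plays the role of [Re2, te2]
A1 = X @ B1 @ np.linalg.inv(X)         # induced camera motion, role of [Rc1, tc1]
A2 = X @ B2 @ np.linalg.inv(X)         # induced camera motion, role of [Rc2, tc2]
```

By construction A1·X = X·B1 and A2·X = X·B2 hold exactly; in the real calibration A and B are measured from the three positions and X is the unknown being solved for.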
Further, in the step (5), the specific steps of obtaining the coordinates of the center point of the chip mounting area and the included angle between the chip mounting area and the horizontal axis include:
step (5.11) preprocessing the first image of the PCB, and removing noise points by adopting a bilateral filtering method;
bilateral filtering preserves image edge information well while removing noise;
step (5.12): binarize the image using a global fixed-threshold method:

f2(x, y) = 255 if f1(x, y) ≥ T, otherwise f2(x, y) = 0

where (x, y) are the coordinates of a pixel, f1(x, y) is the graying function f1(x, y) = R×0.3 + G×0.59 + B×0.11, f2(x, y) is the gray value after binarization, and T is the set gray value;
T is chosen by sweeping the gray value from dark to light in sequence and selecting the value with the best experimental effect; R, G and B denote the three color channels of each pixel, and the gray value of each pixel is a fixed-proportion weighting of them; the global fixed-threshold method is computationally simple and fast;
step (5.13) performing morphological closed operation processing on the image to smooth the edge area;
step (5.14) extracting the edge contour of the image processed in the step (5.13), and screening out the contour of a chip mounting area on the PCB primary image according to the area and the length of the edge contour;
and (5.15) calculating the coordinates of the central point of the rectangular area for mounting the chip and the included angle between the rectangular area and a horizontal axis according to the contour screened in the step (5.14), wherein the horizontal axis refers to the horizontal side of the image.
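A minimal numpy sketch of the graying and global fixed-threshold binarization of step (5.12), with an illustrative 2×2 image and threshold (assumed values, not patent data):

```python
import numpy as np

# Illustrative 2x2 RGB image (assumed pixel values).
rgb = np.array([[[200, 180, 160], [30, 40, 50]],
                [[90, 90, 90], [250, 250, 250]]], dtype=float)

# f1(x, y) = R*0.3 + G*0.59 + B*0.11
gray = rgb[..., 0] * 0.3 + rgb[..., 1] * 0.59 + rgb[..., 2] * 0.11

T = 128.0  # the set gray value, chosen experimentally in the patent
binary = np.where(gray >= T, 255, 0)
```

The bright pixels map to 255 and the dark ones to 0, ready for the morphological close and contour extraction of steps (5.13) and (5.14).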
Further, in the step (5.15), the specific calculation steps of the coordinates of the central point and the included angle between the coordinates of the central point and the horizontal axis are as follows:
step (5.31) fitting and screening the minimum circumscribed rectangle of the outline;
step (5.32): extract the four vertices (m0, n0), (m1, n1), (m2, n2), (m3, n3) of the minimum bounding rectangle;
step (5.33): the center point coordinates (mc, nc) are calculated as

mc = (m0 + m1 + m2 + m3)/4,  nc = (n0 + n1 + n2 + n3)/4

and the included angle θ with the horizontal axis is calculated as θ = arctan((n1 − n0)/(m1 − m0)).
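The center point and angle computation of steps (5.32)-(5.33) amounts to averaging the four rectangle vertices and taking the inclination of one edge; a small sketch with an assumed rectangle:

```python
import math

# An assumed rotated rectangle (vertices in image coordinates, illustrative).
verts = [(0.0, 0.0), (40.0, 30.0), (25.0, 50.0), (-15.0, 20.0)]

# Center point: mean of the four vertices.
mc = sum(m for m, _ in verts) / 4.0
nc = sum(n for _, n in verts) / 4.0

# Angle with the horizontal axis: inclination of the first edge.
(m0, n0), (m1, n1) = verts[0], verts[1]
theta = math.degrees(math.atan2(n1 - n0, m1 - m0))
```

For this rectangle the center is (12.5, 25.0) and the edge from (0, 0) to (40, 30) is inclined arctan(30/40) ≈ 36.87 degrees to the horizontal.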
Further, in the step (5) or the step (7), the specific steps of obtaining the coordinates of the center point of the chip and the included angle between the chip and the horizontal axis include:
here the chip image is the chip first image, the chip second image or the chip third image; the chip center point coordinates are correspondingly the coordinates of the chip center point in that image; the included angle between the chip and the horizontal axis is the angle in the chip second image;
step (5.21) preprocessing the chip image, and removing noise points by adopting a bilateral filtering method;
and (5.22) carrying out binarization processing on the image by adopting a maximum inter-class variance method (OTSU), and extracting an approximate area of the chip:
g = w1 × w2 × (u1 − u2)²
where M×N is the total number of pixels in the chip image; N1 is the number of pixels with gray value less than the set gray value and N2 the number with gray value greater than it; w1 = N1/(M×N) is the proportion of pixels in the chip region to the total number of pixels in the whole chip image, and u1 is the mean gray value of those pixels; w2 = N2/(M×N) is the proportion of pixels in the region outside the chip region, and u2 is the mean gray value of those pixels; w1 + w2 = 1; and g is the between-class variance of the chip region and the region outside it;
the set gray value is selected for experimental effect after sweeping the gray value from dark to light; the optimal threshold is chosen by computing g for each candidate: traverse gray values from 0 to 255 and take the value that maximizes g as the optimal threshold, realizing the binarization of the image; the maximum between-class variance method is simple to compute and has a low error rate;
step (5.23) morphological expansion processing is carried out on the image, and an interference area with small area is removed and the edge is smoothed;
step (5.24) extracting the edge contour of the image processed in the step (5.23), and screening out the contour of the chip according to the area and the length of the edge contour;
and (5.25) calculating the coordinates of the central point of the chip and the included angle between the chip and the horizontal axis according to the contour screened in the step (5.24).
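The between-class variance criterion of step (5.22) can be sketched in pure numpy: sweep candidate thresholds from 0 to 255 and keep the one maximizing g = w1·w2·(u1 − u2)². The toy gray values below are illustrative only:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing the between-class variance g."""
    gray = np.asarray(gray).ravel()
    best_t, best_g = 0, -1.0
    for t in range(256):
        low, high = gray[gray < t], gray[gray >= t]
        if low.size == 0 or high.size == 0:
            continue  # one class empty: g undefined, skip
        w1, w2 = low.size / gray.size, high.size / gray.size
        g = w1 * w2 * (low.mean() - high.mean()) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t

# Toy "chip image" gray values: a dark cluster and a bright cluster.
img = np.array([10, 12, 11, 200, 205, 198, 202, 9])
t = otsu_threshold(img)  # lands between the two clusters
```

The selected threshold cleanly separates the dark background samples from the bright chip samples, which is what the patent uses to extract the approximate chip area.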
Further, a chip mounting system guided by robot vision comprises a robot and a chip mounting workbench; the robot comprises a robot body, a control system and a vision system; the control system is in communication connection with the vision system; the chip mounting workbench is provided with a chip placing area and a PCB placing area, a second industrial camera is arranged between the chip placing area and the PCB placing area, and the second industrial camera is in communication connection with the vision system;
the vision system comprises a first industrial camera, an image processing unit and an image storage unit; the first industrial camera is mounted on a hand of the robot;
the image processing unit is used for processing the image acquired by the first industrial camera and the image acquired by the second industrial camera by using the chip mounting method guided by the robot vision;
the image storage unit is used for storing data and information processed by the vision system;
the control system is used for controlling the robot to execute corresponding actions according to the processing result of the vision system by utilizing the chip mounting method guided by the robot vision.
Advantageous effects:
According to the chip mounting method and system guided by robot vision, a second industrial camera is additionally arranged between the PCB placement area and the chip placement area. The robot sucks up the chip and moves it to the second industrial camera, where an image of the sucked chip is taken; the angle difference between the included angle of the chip mounting area with the horizontal axis and the included angle of the sucked chip with the horizontal axis is computed, and the end effector rotates at position P3 to perform angle compensation; compensation is also performed on the X and Y axes. This corrects the sliding error generated while sucking the chip or moving after suction, and improves mounting precision. Meanwhile, a six-axis robot is adopted; its greater freedom and flexibility enable chip mounting inside cavity workpieces and realize flexible production. In addition, the first industrial camera is arranged on the robot hand and is easy to move, so this position-based visual control method has high flexibility and a wide range of application.
The invention photographs four times to improve mounting precision. The PCB is photographed once to obtain the accurate position of the PCB and of the chip mounting area on it, so that the sucked chip can be placed in the mounting area accurately. The chip is photographed three times. The first photograph obtains the accurate position of the chip, so that the robot can control the end effector to suck it accurately. The second photograph is taken by the second industrial camera: the position or angle of the chip may change while the robot sucks it and moves it to the second industrial camera, so the second photograph determines the changed coordinates of the chip center point and its included angle with the horizontal axis, and controlling the rotation of the end effector compensates for the angle change, realizing angle correction. The third photograph is also taken by the second industrial camera to obtain the compensation values in the X and Y directions after the angle correction; the X and Y compensation is then completed without changing the angle of the chip. The method thus accounts for the position or angle changes that may occur while sucking and moving the chip, greatly improving mounting accuracy.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only one embodiment of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
FIG. 1 is a schematic view of the mounting of a robotic vision-guided die attach system of the present invention;
FIG. 2 is a flow chart of a robotic vision-guided chip placement method of the present invention;
FIG. 3 is a flow chart of the present invention for obtaining coordinates of a center point of a chip mounting area and an angle between the chip mounting area and a horizontal axis;
FIG. 4 is a flow chart of the present invention for obtaining coordinates of a center point of a chip and an included angle between the chip and a horizontal axis;
FIG. 5 is an image produced by a first image processing process of the PCB of the present invention;
in which (a) is the grayscale image of the PCB, (b) is the PCB image after bilateral filtering, (c) is the PCB image after binarization, (d) is the PCB image after the closing operation, (e) is the PCB image after edge extraction, and (f) is the PCB image after edge screening and rectangle fitting;
FIG. 6 is a graph showing the result of the coordinates of the center point of the chip mounting area on the PCB and the angle between the center point and the horizontal axis according to the present invention;
FIG. 7 is an image resulting from the chip image processing process of the present invention;
in which (a) is the grayscale image of the chip, (b) is the chip image after bilateral filtering, (c) is the chip image after binarization, (d) is the chip image after dilation, (e) is the chip image after edge extraction, and (f) is the chip image after edge screening and rectangle fitting;
FIG. 8 is a graph showing the result of the coordinates of the center point of the chip and the angle with the horizontal axis according to the present invention;
FIG. 9 is a schematic structural view of a checkerboard shape calibration plate for use in the present invention;
Reference numbers: 1 - six-axis robot; 2 - first industrial camera; 3 - second industrial camera; 4 - PCB placement area light source; 5 - chip placement area light source; 6 - second industrial camera coaxial light source; 7 - chip; 8 - PCB; 9 - workbench; 10 - suction cup; 11 - inner corner point.
Detailed Description
The technical solutions of the present invention are described clearly and completely below with reference to the drawings of the embodiments. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
As shown in fig. 1 and fig. 2, the chip mounting method using robot vision guidance provided by the present invention includes the following steps:
step (1): the installation and the photographing position of the camera are determined;
the first industrial camera is installed on the hand of the six-axis robot, moves along with the hand, has a wider photographing visual field, is used for first-time image acquisition of the chip and image acquisition of the PCB, moves between the PCB placement area and the chip placement area, reduces the using number of the cameras, and saves cost. The robots described below are all six-axis robots.
After the first industrial camera is installed, photographing positions P1 and P2, at which clear images of the chip placement area and the PCB placement area can be acquired, are set above the chip placement area and the PCB placement area respectively, and the robot is taught positions P1 and P2; a second industrial camera is installed between the chip placement area and the PCB placement area, a photographing position P3 is set, and the robot is taught position P3; the second industrial camera is used for the second and third image acquisition of the chip.
The photographing positions P1 and P2 must not only keep the target areas within the field of view of the industrial camera but also be at a height at which focusing can be completed. The second industrial camera is fixedly installed and cannot move, so the robot controls the end effector (a suction cup) to suck the chip and move it into the field of view of the second industrial camera; the height is adjusted for focusing, and the terminal coordinate displayed on the robot teach pendant is P3.
The end effector of the robot may take many forms including a suction cup, a gripper, etc., and in this embodiment, the end effector is a suction cup.
In order to improve the quality of image acquisition, a light source is respectively installed in the chip placing area, the PCB placing area and the second industrial camera installing area.
Step (2): calibrating a camera;
calibrating a first industrial camera and a second industrial camera, acquiring internal parameters and external parameters of the first industrial camera and the second industrial camera, and determining the relation between an image coordinate system and a world coordinate system; and completing the distortion correction of the camera according to the internal parameters.
The calibration of the first industrial camera comprises the calibration of the first industrial camera at P1 and P2, the calibration of the second industrial camera is the calibration of the second industrial camera at P3, and the camera internal and external parameters at the three positions are respectively obtained, and the specific steps are as follows:
step (2.1): acquiring images at different angles;
The checkerboard calibration board is placed on the plane of the PCB placement area; its position is changed (within the field of view of the industrial camera), and 15-20 pictures of the calibration board are taken at P2 from different angles. The checkerboard calibration board is shown in fig. 9.
Step (2.2): obtaining coordinates of the internal angle points;
According to the pictures taken in step (2.1), the image coordinates S1(u, v, 1)^T of the inner corner points of the checkerboard calibration board are obtained with the Harris corner detection algorithm, where u and v respectively represent the abscissa and ordinate of the inner corner point in the picture; the world coordinates S2(X, Y, Z, 1)^T of the inner corner points of the checkerboard calibration board are obtained according to the actual size of the checkerboard calibration board designed during manufacture, where X, Y and Z respectively represent the abscissa, ordinate and height of the inner corner point in the world coordinate system, and S1, S2 represent scale factors.
The scale factors are introduced to simplify calculation and do not change the coordinate values; any convenient non-zero number may be chosen. The origin of the world coordinate system is the corner point at the upper left of the checkerboard calibration board; since the size of each square of the calibration board is fixed at design time, the actual position of each inner corner point can be obtained in turn from the square size. The corner points detected by the Harris corner detection algorithm are the inner corner points (excluding those on the border); the algorithm has a small computational load and is insensitive to illumination and rotation.
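The generation of the inner-corner world coordinates from the square size can be sketched as follows; the 9x6 corner grid and the 10 mm square size are illustrative assumptions, since the patent does not give the board dimensions:

```python
import numpy as np

def checkerboard_world_points(cols, rows, square_size):
    """World coordinates of the inner corner points, row by row, with the
    origin at the upper-left inner corner and Z = 0 on the board plane."""
    pts = np.zeros((rows * cols, 3))
    grid = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2)  # (column index, row index) pairs
    pts[:, :2] = grid * square_size
    return pts

# Example: 9 x 6 inner corners, 10 mm squares (assumed values)
world = checkerboard_world_points(9, 6, 10.0)
print(world[0], world[1], world[9])  # origin, next corner along X, first corner of row 2
```

Each corner's world position is simply its grid index times the square size, which is why the calibration board must be manufactured to a known, accurate size.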
Step (2.3): determining the relation between an image coordinate system and a world coordinate system;
According to the image coordinates and world coordinates of the inner corner points obtained in step (2.2), the internal parameter matrix A and the external parameter matrix [R, t] of the first industrial camera are calculated, and the correspondence between the image coordinate system and the world coordinate system is determined as

S1 · (u, v, 1)^T = A · [R, t] · S2 · (X, Y, Z, 1)^T

wherein

A = [[α, γ, u0], [0, β, v0], [0, 0, 1]]

R represents a 3×3 rotation matrix, t represents a 3×1 translation vector (px, py, pz)^T, (u0, v0) are the principal point coordinates (the principal point is the intersection of the camera optical axis with the image plane), α and β respectively represent the normalized focal lengths on the X and Y axes, and γ represents the skew of the image plane. The first column of R corresponds to the rotation about the X axis, the second column to the rotation about the Y axis, and the third column to the rotation about the Z axis; the three elements px, py, pz of t respectively represent the translation distances along the X, Y and Z axes;
The 0 and 1 entries in the internal parameter matrix A have no special meaning; they are added only for convenience of calculation and do not change the result. The OpenCV vision library already provides functions for calculating the internal and external parameters; the internal parameters A and the external parameters R, t are obtained simply by passing the image coordinates and world coordinates of the inner corner points to those functions.
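Under the pinhole model above, a world point maps to pixel coordinates by plain matrix arithmetic; a minimal sketch, using illustrative values for A, R and t rather than calibrated ones:

```python
import numpy as np

def project(A, R, t, Xw):
    """Project a 3-D world point Xw into pixel coordinates (u, v)
    using s * (u, v, 1)^T = A [R | t] (X, Y, Z, 1)^T."""
    Rt = np.hstack([R, t.reshape(3, 1)])   # 3x4 external parameter matrix [R | t]
    uvw = A @ Rt @ np.append(Xw, 1.0)      # homogeneous image point
    return uvw[:2] / uvw[2]                # divide out the scale factor

# Illustrative internal parameters: focal lengths 800, principal point (320, 240), no skew
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                     # camera axes aligned with the world axes
t = np.array([0.0, 0.0, 500.0])   # board plane 500 units in front of the camera

print(project(A, R, t, np.array([0.0, 0.0, 0.0])))  # the world origin lands on the principal point
```

With the identity rotation, the world origin projects exactly to (u0, v0), which is a quick sanity check on any calibration result.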
Step (2.4): calibrating a camera in a chip placement area;
Since the external parameters of the camera change when the camera position changes, steps (2.1) to (2.3) are repeated at the chip placement area P1 to complete the calibration of the first industrial camera at photographing position P1 and obtain its internal and external parameters there; the calibration at P1 is mainly for the external parameters, because a change of camera position has little influence on the internal parameters.
Step (2.5): calibrating a second industrial camera;
Variations in the camera manufacturing process cause the internal and external parameters to differ from camera to camera, and the external parameters differ from position to position, so the second industrial camera is also calibrated at P3. The suction cup is controlled to suck the checkerboard calibration board and bring it to position P3 in the area of the second industrial camera; the position of the calibration board is changed (by operating the manipulator), 15-20 pictures are taken at P3 from different angles, and steps (2.2) to (2.3) are repeated to complete the calibration of the second industrial camera at P3 and obtain its internal and external parameters.
Step (3): calibrating the hand and the eye of the robot;
The hand-eye calibration of the robot obtains the relation between the camera coordinate system and the robot coordinate system, with the following steps:
step (3.1): calibrating the hand and eye of the PCB placement area;
On the plane where the PCB placement area is located, three positions A1, B1 and C1 that are not on the same straight line are selected; the three points must not be collinear, to guarantee that the solution obtained is unique. The robot suction cup is controlled to reach the three positions in turn, and the calibration of the first industrial camera at the three positions is completed with the method of step (2), giving the three external parameter matrices [R1, t1], [R2, t2], [R3, t3]; the description matrices [R4, t4], [R5, t5], [R6, t6] corresponding to positions A1, B1 and C1 in the robot coordinate system are obtained on the robot teach pendant;
The description matrices are displayed directly on the robot teach pendant and describe the position and posture of the robot at positions A1, B1 and C1;
wherein [Rc1, tc1] = [R1, t1] · [R2, t2]^(-1); [Rc1, tc1] represents the transformation matrix between the camera coordinate systems at the two positions A1 and B1; [Rc2, tc2] = [R2, t2] · [R3, t3]^(-1); [Rc2, tc2] represents the transformation matrix between the camera coordinate systems at the two positions B1 and C1;
[Re1, te1] = [R4, t4] · [R5, t5]^(-1); [Re1, te1] represents the transformation matrix between the robot coordinate systems at the two positions A1 and B1; [Re2, te2] = [R5, t5] · [R6, t6]^(-1); [Re2, te2] represents the transformation matrix between the robot coordinate systems at the two positions B1 and C1.
Step (3.2): establishing a relational expression;
The robot hand-eye calibration is in fact the solution of the equation AX = XB. According to the three external parameter matrices and the description matrices obtained in step (3.1), the following relations are established:

[Rc1, tc1] · [Rx, tx] = [Rx, tx] · [Re1, te1]
[Rc2, tc2] · [Rx, tx] = [Rx, tx] · [Re2, te2]
From these relations, the relation matrix [Rx, tx] between the industrial camera and the suction cup is solved with the MATLAB toolbox; Rx and tx respectively represent the rotation matrix and the translation vector that convert the camera coordinate system into the robot coordinate system.
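The AX = XB relation can be checked numerically: packed as homogeneous 4x4 transforms, the camera-motion and robot-motion pairs built as in step (3.1) must satisfy the equation for the true hand-eye transform X. The sketch below uses synthetic transforms (all values assumed) to illustrate the identity, not the solver itself:

```python
import numpy as np

def hom(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation by angle a (radians) about the Z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Assumed hand-eye transform X, playing the role of [Rx, tx] (camera -> robot)
X = hom(rot_z(0.3), np.array([10.0, -5.0, 2.0]))

# A synthetic robot motion B, as from [R4,t4][R5,t5]^-1, and the matching camera motion A
B = hom(rot_z(0.7), np.array([1.0, 2.0, 0.0]))
A = X @ B @ np.linalg.inv(X)        # built so that A X = X B holds exactly

print(np.allclose(A @ X, X @ B))    # True: the hand-eye equation is satisfied
```

In practice X is unknown and A, B come from measurements, so a solver (e.g. the MATLAB toolbox named above) finds the X that best satisfies all such pairs; non-collinear motions are needed so the solution is unique, which is why three non-collinear positions are required.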
Step (3.3): calibrating the hand and eye of the chip placement area;
in order to further ensure the precision, the steps (3.1) and (3.2) are repeated in the chip placing area, the hand-eye calibration of the chip placing area is completed, and the relation between the camera coordinate system of the chip placing area and the robot coordinate system is obtained.
Step (3.4): calibrating the hand and eye of a second industrial camera area;
In the second industrial camera area, because the lens of the second industrial camera faces upward, the checkerboard calibration board must be held up by a bracket for photographing; steps (3.1) and (3.2) are repeated to complete the hand-eye calibration of the second industrial camera in this area and obtain the relation matrix between the coordinate system of the second industrial camera and the robot coordinate system;
The checkerboard calibration board is supported at the focusing height of the camera, in the middle of the field of view of the second industrial camera; the second industrial camera area refers to the area that the second industrial camera can capture.
The camera calibration and the robot hand-eye calibration complete the distortion correction of the camera and the conversion between coordinate systems, making the mounting more accurate; they need to be carried out only once before formal use. During normal mounting work, the robot first moves to photographing position P2 to complete the image acquisition of the PCB (note that a PCB reaching the mounting area has already completed the preceding processes such as gluing), and then moves to photographing position P1 to complete the image acquisition of the chip. Because the chip is sucked immediately after its photograph, the PCB image is collected first and the robot then moves to the chip photographing position.
Step (4): acquiring images of the PCB and the chip for the first time;
The first image acquisition of the PCB is completed at photographing position P2, and the robot then moves to photographing position P1 to complete the first image acquisition of the chip.
Step (5): image processing and coordinate transformation for the first time;
The first image of the PCB and the first image of the chip are processed respectively to obtain the coordinates of the center point of the chip mounting area in the first image of the PCB, the included angle between the chip mounting area and the horizontal axis, and the coordinates of the center point of the chip in the first image of the chip. Then, according to the relation between the image coordinate system and the world coordinate system and the relation between the camera coordinate system and the robot coordinate system, the coordinates (x0, y0) of the center point of the chip mounting area in the robot coordinate system and the coordinates (x1, y1) of the center point of the chip in the robot coordinate system are calculated, and (x0, y0) and (x1, y1) are transmitted to the robot.
As shown in fig. 3, the specific steps of obtaining the coordinates of the center point of the chip mounting area and the included angle between the chip mounting area and the horizontal axis include:
Step (5.11): read the first image of the PCB (as shown in fig. 5(a)); because the camera is a black-and-white camera, all images are grayscale images, which is convenient for subsequent processing. The first image of the PCB is preprocessed and noise is removed with a bilateral filtering method, as shown in fig. 5(b); bilateral filtering preserves the image edge information well while removing noise.
The bilateral filter kernel is the product of a Gaussian function of the spatial distance and a Gaussian function of the gray distance. The spatial distance is the Euclidean distance between the current point and the center point, giving the weight

ws = exp(-((xi - xc)^2 + (yi - yc)^2) / (2σ1^2))

where (xi, yi) is the current position, (xc, yc) is the position of the center point, and σ1 is the spatial-domain standard deviation. The gray distance is the absolute value of the difference between the gray value of the current point and that of the center point, giving the weight

wr = exp(-(g(xi, yi) - g(xc, yc))^2 / (2σ2^2))

where g(xi, yi) is the gray value of the current point, g(xc, yc) is the gray value of the center point, and σ2 is the value-range standard deviation. Experiments show that a bilateral filter with a radius of 10, a value-range standard deviation σ2 of 40 and a spatial-domain standard deviation σ1 of 3 removes noise with minimal impact on the edges and gives the best effect.
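A minimal sketch of the bilateral weight product described above, filtering a single pixel (a direct, unoptimized implementation; the 3x3 window and σ values here are illustrative, not the tuned radius-10 filter of the experiment):

```python
import numpy as np

def bilateral_pixel(img, yc, xc, radius, sigma_s, sigma_r):
    """Filter one pixel: weight = spatial Gaussian * gray-value Gaussian."""
    h, w = img.shape
    num = den = 0.0
    for yi in range(max(0, yc - radius), min(h, yc + radius + 1)):
        for xi in range(max(0, xc - radius), min(w, xc + radius + 1)):
            ws = np.exp(-((xi - xc) ** 2 + (yi - yc) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-(img[yi, xi] - img[yc, xc]) ** 2 / (2 * sigma_r ** 2))
            num += ws * wr * img[yi, xi]
            den += ws * wr
    return num / den

# A step edge with one noisy pixel: the noise is smoothed, the edge is respected
img = np.array([[10, 10, 10, 200, 200],
                [10, 30, 10, 200, 200],
                [10, 10, 10, 200, 200]], dtype=float)
smoothed = bilateral_pixel(img, 1, 1, radius=1, sigma_s=1.0, sigma_r=20.0)
print(smoothed)  # pulled from 30 back toward the 10 background
```

Because wr falls off with gray difference, pixels on the far side of an edge receive almost no weight, which is the mechanism behind the edge preservation claimed above.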
Step (5.12): binarize the image with a global fixed-threshold method. A gray threshold T is set (experiments show the best effect at T = 160); pixels greater than T are set to 255 and the remaining pixels to 0, namely:

f2(x, y) = 255 if f1(x, y) > T, and f2(x, y) = 0 otherwise

wherein (x, y) represents the coordinates of a pixel, f1(x, y) represents the graying function f1(x, y) = R × 0.3 + G × 0.59 + B × 0.11, and f2(x, y) represents the gray value after binarization; R, G, B represent the three channels of each pixel, which are weighted in a fixed proportion to give the gray value. The global fixed-threshold method is simple to calculate and fast; after binarization the approximate position of the chip mounting area on the PCB can be extracted, with the effect shown in fig. 5(c).
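The global fixed threshold can be expressed in one vectorized step; this sketch assumes an already-grayscale image (as the camera here is black-and-white) and uses the T = 160 from the experiment:

```python
import numpy as np

def binarize(gray, T=160):
    """Global fixed threshold: pixels strictly above T become 255, the rest 0."""
    return np.where(gray > T, 255, 0).astype(np.uint8)

gray = np.array([[ 12, 159, 160],
                 [161, 200, 255]], dtype=np.uint8)
print(binarize(gray))
```

The fixed threshold works here because the lighting is controlled by the installed light sources; under varying illumination an adaptive method such as the OTSU threshold used later for the chip would be preferable.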
Step (5.13) performs morphological closing operation processing on the image to smooth the edge area, and the processing effect is shown in fig. 5 (d).
Step (5.14): extract the edge contours of the image processed in step (5.13), as shown in fig. 5(e). There may be some interference points and more than one extracted edge, but because the size of the chip mounting area on the PCB is fixed, the contour of the chip mounting area can be screened by the area and length of the edge contours; in the experiment, contours with a length greater than 400 and less than 600 and an area greater than 10000 and less than 15000 were kept.
Step (5.15): from the contour screened in step (5.14), calculate the coordinates of the center point of the rectangular chip mounting area and its included angle with the horizontal axis (the horizontal axis is the horizontal side of the image), as shown in fig. 5(f) and fig. 6.
In the step (5.15), the specific calculation steps of the coordinates of the central point and the included angle between the coordinates of the central point and the horizontal axis are as follows:
Step (5.31): fit the minimum circumscribed rectangle of the screened contour;
Step (5.32): extract the four vertices (m0, n0), (m1, n1), (m2, n2), (m3, n3) of the minimum circumscribed rectangle;
Step (5.33): the center point coordinates (mc, nc) are calculated by

mc = (m0 + m1 + m2 + m3) / 4, nc = (n0 + n1 + n2 + n3) / 4

and the included angle θ with the horizontal axis is calculated by

θ = arctan((n1 - n0) / (m1 - m0))
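Steps (5.31)-(5.33) reduce to a few lines once the four rectangle vertices are known; the vertex ordering assumed here (not specified in the patent) is that (m0, n0) and (m1, n1) are adjacent corners along one side:

```python
import numpy as np

def center_and_angle(verts):
    """Center = mean of the four vertices; angle = inclination of the side from
    vertex 0 to vertex 1 with respect to the horizontal axis, in degrees."""
    v = np.asarray(verts, dtype=float)
    mc, nc = v.mean(axis=0)
    (m0, n0), (m1, n1) = v[0], v[1]
    theta = np.degrees(np.arctan2(n1 - n0, m1 - m0))
    return (mc, nc), theta

# A square rotated 45 degrees about (5, 5)
verts = [(5, 2), (8, 5), (5, 8), (2, 5)]
print(center_and_angle(verts))  # center (5.0, 5.0), angle 45.0
```

Using arctan2 rather than a plain arctan avoids a division by zero when the fitted side is vertical and keeps the sign of the angle unambiguous.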
In the whole mounting process the chip is photographed three times, giving three images: the first, second and third images of the chip, obtained in steps (4), (6) and (7) respectively. The processing of the three chip images is the same, so it is described uniformly as the processing of the chip image: the coordinates of the chip center point mean the coordinates of the chip center point in the first, second or third image of the chip, and the included angle between the chip and the horizontal axis means the included angle between the chip in the second image of the chip and the horizontal axis;
as shown in fig. 4, the specific steps of obtaining the coordinates of the center point of the chip and the included angle between the chip and the horizontal axis are as follows:
and (5.21) reading the chip gray image (as shown in fig. 7(a)), preprocessing the chip gray image, and removing noise by adopting a bilateral filtering method, wherein the processing effect is shown in fig. 7 (b).
Step (5.22): binarize the image with the maximum between-class variance method (OTSU) and extract the approximate area of the chip:

g = w1 × w2 × (u1 - u2)^2

wherein M×N is the total number of pixels in the chip image; N1 is the number of pixels whose gray value is smaller than the set gray value, and N2 is the number of pixels whose gray value is greater than the set gray value; w1 = N1/(M×N) represents the proportion of the pixels in the chip region to the total number of pixels in the whole chip image, and u1 represents the mean gray value of all pixels in the chip region; w2 = N2/(M×N) represents the proportion of the pixels outside the chip region to the total number of pixels in the whole chip image, and u2 represents the mean gray value of all pixels outside the chip region; w1 + w2 = 1, and g represents the between-class variance between the mean gray value of the chip region and the mean gray value of the region outside the chip region;
The set gray value is chosen by increasing the gray value in turn from dark to light and judging by the experimental effect. OTSU selects the optimal threshold by computing g: the gray values from 0 to 255 are traversed, and the gray value at which g takes its maximum is the optimal threshold. The processing result is shown in fig. 7(c).
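The traversal described above, computing g = w1 · w2 · (u1 - u2)^2 for every candidate threshold and keeping the maximizer, can be sketched directly (a plain illustration, not the library call):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing the between-class variance
    g = w1 * w2 * (u1 - u2)^2 over gray values 0..255."""
    pixels = gray.ravel().astype(float)
    n = pixels.size
    best_t, best_g = 0, -1.0
    for t in range(256):
        below = pixels[pixels <= t]        # candidate dark class
        above = pixels[pixels > t]         # candidate bright class
        if below.size == 0 or above.size == 0:
            continue                       # skip degenerate splits
        w1, w2 = below.size / n, above.size / n
        g = w1 * w2 * (below.mean() - above.mean()) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t

# A bimodal image: dark background around 30, bright chip around 220
gray = np.array([30, 28, 32, 31, 220, 222, 218, 221], dtype=np.uint8)
t = otsu_threshold(gray)
print(t)  # any value separating the two modes maximizes g
```

Any threshold between the two modes yields the same class split and the same g, so the first such value is returned; on real images the histogram is dense and the maximizer is essentially unique.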
Step (5.23): perform morphological dilation on the image to remove small interference regions and smooth the edges; the processing result is shown in fig. 7(d).
Step (5.24): extract the edge contours of the image processed in step (5.23) and screen out the contour of the chip by the area and length of the edge contours, as shown in fig. 7(e).
Step (5.25): from the contour screened in step (5.24), calculate the coordinates of the chip center point and the included angle between the chip and the horizontal axis (as shown in fig. 7(f) and fig. 8); the calculation is the same as that of the coordinates of the center point of the chip mounting area on the PCB and its included angle with the horizontal axis, detailed in steps (5.31)-(5.33).
Step (6): acquiring a second image of the chip;
The robot controls the suction cup to reach the coordinates (x1, y1), suck the chip, and move it to photographing position P3, completing the second image acquisition of the chip. Because the chip may slide during suction, the subsequent precision would otherwise be low; photographing position P3 is therefore set to collect an image of the sucked chip, in preparation for the robot to rotate the suction cup for angle compensation.
Step (7): angle compensation;
The second image of the chip is processed to obtain the coordinates of the chip center point in the second image and the included angle θ2 between the chip and the horizontal axis; the suction cup is controlled to rotate through the compensation angle θ21 at photographing position P3, θ21 being the difference between θ2 and the included angle of the chip mounting area obtained in step (5); after the angle compensation is finished, a photograph is taken again to obtain the third image of the rotated chip. The third image of the chip is processed to obtain the coordinates of the chip center point in the third image, and the coordinates (x2, y2) of this center point in the robot coordinate system are calculated according to the relation between the image coordinate system and the world coordinate system and the relation between the camera coordinate system and the robot coordinate system;
The angle compensation corrects the sliding error produced while the chip is being sucked or moved after suction, and improves the mounting precision; for the rotation of the robot, the counterclockwise direction is positive.
Step (8): completing coordinate compensation and mounting;
The compensation in the X and Y directions is obtained from step (7): the suction cup is controlled to move x2 - x0 in the X direction and y2 - y0 in the Y direction (during this movement the included angle between the chip and the horizontal axis does not change), then moves vertically downward to the chip mounting area on the PCB to place the chip, completing the mounting; the board then enters the next heating process.
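The final placement command reduces to simple arithmetic on the quantities already computed. A sketch combining the angle compensation of step (7) and the X/Y compensation of step (8); the function name and numeric values are illustrative, the signs follow the patent's formulas, and the actual robot command interface is not shown:

```python
def placement_compensation(theta2, theta_target, x2, y2, x0, y0):
    """Angle compensation first (rotation of the suction cup at P3), then the
    X/Y offsets computed from the corrected chip center and the mounting-area center."""
    d_theta = theta2 - theta_target    # rotation applied at P3 (counterclockwise positive)
    dx = x2 - x0                       # X-direction move, as in step (8)
    dy = y2 - y0                       # Y-direction move, as in step (8)
    return d_theta, dx, dy

# Chip sucked at 12.5 deg, mounting area at 2.5 deg; centers 1.2 / -0.4 apart (assumed values)
print(placement_compensation(12.5, 2.5, 101.2, 49.6, 100.0, 50.0))
```

Keeping the rotation and the translation as two separate moves, with a re-photograph in between, is what allows the translation offsets to be measured after the angle has already been corrected.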
As shown in fig. 1, a robot vision-guided die attachment system includes a robot and a die attachment table; the robot comprises a robot body, a control system and a vision system; the control system is in communication connection with the vision system; the chip mounting workbench is provided with a chip placing area and a PCB placing area, a second industrial camera is arranged between the chip placing area and the PCB placing area, and the second industrial camera is in communication connection with the vision system;
the vision system comprises a first industrial camera, an image processing unit and an image storage unit; the first industrial camera is mounted on a hand of the robot;
the image processing unit processes the image acquired by the first industrial camera and the image acquired by the second industrial camera by using the chip mounting method guided by the robot vision;
the image storage unit is used for storing data and information processed by the vision system;
the control system controls the robot to execute corresponding actions according to the processing result of the vision system by utilizing the chip mounting method guided by the robot vision.
The above embodiments further describe the objects, technical solutions and advantages of the present invention in detail. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in its protection scope.

Claims (10)

1. A chip mounting method guided by robot vision is characterized by comprising the following steps:
step (1): the installation and the photographing position of the camera are determined;
installing a first industrial camera on the hand of the robot, and respectively setting photographing positions P1 and P2 for acquiring clear chip placement area images and PCB placement area images above the chip placement area and the PCB placement area; installing a second industrial camera between the chip placing area and the PCB placing area, and setting a photographing position P3;
step (2): calibrating a camera;
calibrating the first industrial camera and the second industrial camera, acquiring internal parameters and external parameters of the first industrial camera at P1 and P2 and the second industrial camera at P3, and determining the relation between an image coordinate system and a world coordinate system;
and (3): calibrating the hand and the eye of the robot;
calibrating the hands and eyes of the robot to acquire the relation between a camera coordinate system and a robot coordinate system;
and (4): acquiring images of the PCB and the chip for the first time;
firstly, completing the first image acquisition of the PCB at a photographing position P2, and then moving to a photographing position P1 to complete the first image acquisition of the chip;
and (5): image processing and coordinate transformation for the first time;
respectively processing the first image of the PCB and the first image of the chip to obtain the coordinates of the center point of the chip mounting area in the first image of the PCB, the included angle between the chip mounting area and the horizontal axis, and the coordinates of the center point of the chip in the first image of the chip; calculating, according to the relation between the image coordinate system and the world coordinate system and the relation between the camera coordinate system and the robot coordinate system, the coordinates (x0, y0) of the center point of the chip mounting area in the robot coordinate system and the coordinates (x1, y1) of the center point of the chip in the robot coordinate system; and transmitting (x0, y0) and (x1, y1) to the robot;
and (6): acquiring a second image of the chip;
the robot controls its end effector to reach coordinates (x)1,y1) Sucking the chip and moving the chip to a photographing position P3 to complete the second image acquisition of the chip;
and (7): angle compensation;
processing the secondary image of the chip to obtain the coordinate of the central point of the chip in the secondary image of the chip and the included angle theta between the chip and the horizontal axis2(ii) a Controlling the end effector to rotate theta at the photographing position P321After the angle compensation is finished, photographing again to obtain a third image of the rotated chip; third time image of chipProcessing to obtain the coordinates of the chip center point in the third image of the chip, and calculating the coordinates (x) of the chip center point in the third image of the chip in the robot coordinate system according to the relationship between the image coordinate system and the world coordinate system and the relationship between the camera coordinate system and the robot coordinate system2,y2);
step (8): completing coordinate compensation and mounting;
the end effector moves x2 - x0 in the X direction and y2 - y0 in the Y direction, then moves vertically downward to the chip mounting area on the PCB to place the chip, completing the mounting and entering the next procedure.
2. The robot-vision-guided chip mounting method according to claim 1, wherein the robot in the step (1) is a six-axis robot, and the first industrial camera is mounted on a hand of the six-axis robot.
3. The robot-vision-guided chip mounting method according to claim 1, wherein in the step (1), a midpoint between the PCB placement area and the chip placement area is defined as an area center point, the second industrial camera is installed between the PCB placement area and the area center point, and a lens of the second industrial camera faces upward.
4. A robot vision-guided chip mounting method according to claim 1, wherein in the step (1), a light source is installed in each of the chip placement area, the PCB placement area, and an area between the chip placement area and the PCB placement area.
5. The robot-vision-guided chip mounting method according to claim 1, wherein the camera calibration in step (2) comprises the following specific steps:
step (2.1): acquiring images at different angles;
placing the chessboard-like calibration board on the plane of the PCB placement area, changing the position of the chessboard-like calibration board, and taking 15-20 pictures at P2 from different angles;
step (2.2): obtaining coordinates of the internal angle points;
according to the pictures taken in step (2.1), obtaining the image coordinates (u, v) of the inner corner points of the checkerboard calibration board by means of the Harris corner detection algorithm, where u and v respectively represent the abscissa and ordinate of an inner corner point on the picture; and obtaining the world coordinates (X, Y, Z) of the inner corner points of the checkerboard calibration board according to the actual size of the board designed at manufacture, where X, Y and Z respectively represent the coordinates of the inner corner point on the abscissa, the ordinate and the height in the world coordinate system, and S1, S2 represent the scale factors of the homogeneous image and world coordinates respectively;
step (2.3): determining the relation between an image coordinate system and a world coordinate system;
according to the image coordinates and world coordinates of the inner corner points, calculating the intrinsic parameter matrix A and the extrinsic parameter matrix [R, t] of the camera, and determining the corresponding relation between the image coordinate system and the world coordinate system as

s[u, v, 1]^T = A[R, t][X, Y, Z, 1]^T

wherein s is a scale factor; A = [alpha, gamma, u0; 0, beta, v0; 0, 0, 1]; R represents a 3 x 3 rotation matrix and t a 3 x 1 translation vector; (u0, v0) are the principal point coordinates; alpha and beta respectively represent the normalized focal lengths on the X and Y axes, and gamma represents the degree of inclination (skew) of the image plane; the first column of elements in R represents the rotation about the X axis, the second column the rotation about the Y axis, and the third column the rotation about the Z axis; the three elements px, py, pz in t respectively represent the translation distances on the X, Y and Z axes;
step (2.4): calibrating a camera in a chip placement area;
repeating the steps (2.1) to (2.3) at the chip placement area P1 to complete the first industrial camera calibration at the photographing position P1;
step (2.5): calibrating a second industrial camera;
and in the area where the second industrial camera is located, controlling the end effector to suck the checkered calibration plate to reach the position P3, changing the position of the checkered calibration plate, taking 15-20 pictures at the position P3 from different angles, and repeating the steps (2.2) to (2.3) to finish the calibration of the second industrial camera at the position P3.
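The calibration of steps (2.2)-(2.3) rests on the pinhole projection s[u, v, 1]^T = A[R, t][X, Y, Z, 1]^T. A minimal numeric sketch of that relation; the intrinsics and pose below are illustrative values, not ones from the patent:

```python
import numpy as np

def project_point(A, R, t, P_world):
    """Project a 3-D world point into pixel coordinates with intrinsics A and extrinsics [R, t]."""
    P_cam = R @ P_world + t          # world frame -> camera frame
    uvw = A @ P_cam                  # camera frame -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]          # divide out the scale factor s

alpha, beta, gamma = 800.0, 800.0, 0.0   # normalized focal lengths and skew (illustrative)
u0, v0 = 320.0, 240.0                    # principal point (illustrative)
A = np.array([[alpha, gamma, u0],
              [0.0,   beta,  v0],
              [0.0,   0.0,  1.0]])

R = np.eye(3)                        # board parallel to the image plane for this example
t = np.array([0.0, 0.0, 2.0])        # board 2 units in front of the camera

uv = project_point(A, R, t, np.array([0.1, -0.05, 0.0]))
```

Calibration inverts this mapping: from many known (u, v) / (X, Y, Z) corner pairs it recovers A and [R, t].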
6. The method for chip mounting guided by robot vision according to claim 5, wherein the step (3) of calibrating the robot by hand and eye specifically comprises the steps of:
step (3.1): calibrating the hand and eye of the PCB placement area;
selecting three positions A1, B1 and C1 which are not on the same straight line on the plane of the PCB placement area, controlling the robot end effector to reach the three positions in sequence, completing the calibration of the first industrial camera at the three positions, and obtaining three extrinsic parameter matrixes, respectively [R1, t1], [R2, t2], [R3, t3]; obtaining from the robot teach pendant the description matrixes [R4, t4], [R5, t5], [R6, t6] corresponding to positions A1, B1 and C1 in the robot coordinate system;
wherein [Rc1, tc1] = [R1, t1] * [R2, t2]^-1, [Rc1, tc1] representing the transformation matrix between the camera coordinate systems at the two positions A1 and B1; [Rc2, tc2] = [R2, t2] * [R3, t3]^-1, [Rc2, tc2] representing the transformation matrix between the camera coordinate systems at the two positions B1 and C1;
[Re1, te1] = [R4, t4] * [R5, t5]^-1, [Re1, te1] representing the transformation matrix between the robot coordinate systems at the two positions A1 and B1; [Re2, te2] = [R5, t5] * [R6, t6]^-1, [Re2, te2] representing the transformation matrix between the robot coordinate systems at the two positions B1 and C1;
step (3.2): establishing a relational expression;
establishing the following relational expressions according to the three extrinsic parameter matrixes and the description matrixes obtained in step (3.1):

[Re1, te1] * [Rx, tx] = [Rx, tx] * [Rc1, tc1]
[Re2, te2] * [Rx, tx] = [Rx, tx] * [Rc2, tc2]

and solving these relations to obtain the relation matrix [Rx, tx] between the industrial camera and the end effector, where Rx and tx respectively represent the rotation matrix and the translation vector converting the camera coordinate system to the robot coordinate system;
step (3.3): calibrating the hand and eye of the chip placement area;
repeating the steps (3.1) and (3.2) in the chip placement area to finish the hand-eye calibration of the chip placement area;
step (3.4): calibrating the hand and eye of a second industrial camera area;
in the area of the second industrial camera, the checkerboard calibration board is set up, and steps (3.1) and (3.2) are repeated to complete the hand-eye calibration of this area.
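Steps (3.1)-(3.2) are the classical AX = XB hand-eye formulation: if X is the unknown camera-to-end-effector transform, then each robot motion [Re, te] and the matching camera motion [Rc, tc] must satisfy [Re, te]X = X[Rc, tc]. A synthetic consistency check of that relation (all values illustrative, not a solver for X):

```python
import numpy as np

def make_T(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(a):
    """Rotation about the Z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

X = make_T(rot_z(0.3), np.array([0.05, -0.02, 0.10]))  # assumed hand-eye transform
B = make_T(rot_z(0.7), np.array([0.10, 0.00, 0.00]))   # camera motion between A1 and B1
A = X @ B @ np.linalg.inv(X)                           # corresponding robot motion

residual = np.abs(A @ X - X @ B).max()  # ~0 whenever X is consistent with (A, B)
```

In practice two such motion pairs (A1→B1 and B1→C1, as in the claim) are needed to determine X uniquely.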
7. The robot vision-guided chip mounting method according to claim 1, wherein in the step (5), the specific steps of obtaining the coordinates of the center point of the chip mounting area and the included angle between the chip mounting area and the horizontal axis include:
step (5.11) preprocessing the first image of the PCB, and removing noise points by adopting a bilateral filtering method;
step (5.12): performing binarization processing on the image by a global fixed-threshold method as follows:

f2(x, y) = 255 if f1(x, y) > T, and f2(x, y) = 0 otherwise

wherein (x, y) represents the coordinates of a pixel point, f1(x, y) represents the graying function, f1(x, y) = R x 0.3 + G x 0.59 + B x 0.11, f2(x, y) represents the gray value after binarization, and T represents the set gray threshold;
step (5.13) performing morphological closed operation processing on the image to smooth the edge area;
step (5.14) extracting the edge contours of the image processed in step (5.13), and screening out the contour of the chip mounting area on the first image of the PCB according to the area and the length of the edge contours;
and (5.15) calculating the coordinates of the central point of the chip mounting area and the included angle between the area and the horizontal axis according to the contour screened in the step (5.14).
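A minimal sketch of the graying and thresholding of steps (5.11)-(5.12), using the stated weights f1 = 0.3 R + 0.59 G + 0.11 B; the bilateral filtering and morphological close of steps (5.11)/(5.13) are omitted, and the image values are illustrative:

```python
import numpy as np

def binarize(rgb, T):
    """rgb: H x W x 3 array; returns a 0/255 mask via a global fixed gray threshold T."""
    # Grayscale with the weights from step (5.12): f1 = 0.3*R + 0.59*G + 0.11*B.
    gray = 0.3 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]
    return np.where(gray > T, 255, 0).astype(np.uint8)

# Synthetic scene: a bright mounting pad on a dark board.
img = np.zeros((4, 4, 3), dtype=np.float64)
img[1:3, 1:3] = [200.0, 200.0, 200.0]
mask = binarize(img, T=128)
```

The bright pad pixels survive as 255 while the dark background maps to 0, ready for the contour extraction of step (5.14).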
8. The robot-vision-guided chip mounting method according to claim 7, wherein in the step (5.15), the specific calculation steps of the coordinates of the center point and the included angle with the horizontal axis are as follows:
step (5.31) fitting the minimum circumscribed rectangle of the screened contour;
step (5.32) extracting the four vertices (m0, n0), (m1, n1), (m2, n2), (m3, n3) of the minimum circumscribed rectangle;
step (5.33) calculating the center point coordinates (mc, nc) by the formula

mc = (m0 + m1 + m2 + m3) / 4, nc = (n0 + n1 + n2 + n3) / 4

and the included angle theta with the horizontal axis by the formula

theta = arctan((n1 - n0) / (m1 - m0))
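A sketch of steps (5.31)-(5.33), assuming the four rectangle vertices are listed in order around the rectangle; the vertex values are illustrative:

```python
import numpy as np

def rect_center_angle(verts):
    """Center and tilt of a minimum circumscribed rectangle from its 4 ordered vertices."""
    verts = np.asarray(verts, dtype=float)   # 4 x 2 array of (m_i, n_i)
    mc, nc = verts.mean(axis=0)              # center = average of the four vertices
    dm, dn = verts[1] - verts[0]             # one edge of the rectangle
    theta = np.degrees(np.arctan2(dn, dm))   # angle of that edge vs. the horizontal axis
    return (mc, nc), theta

# A square rotated by 45 degrees, centered at (0, 10).
center, theta = rect_center_angle([(0, 0), (10, 10), (0, 20), (-10, 10)])
```

Using arctan2 rather than a bare arctan avoids division by zero for vertical edges; the claim's arctan form is the special case dm != 0.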
9. The chip mounting method guided by robot vision according to claim 1, wherein in step (5) or step (7), the specific steps of obtaining the coordinates of the center point of the chip and the included angle between the chip and the horizontal axis are as follows:
the chip image is a first image of the chip, a second image of the chip or a third image of the chip; the chip center point coordinate is the coordinate of the chip center point in the first image of the chip, the coordinate of the chip center point in the second image of the chip or the coordinate of the chip center point in the third image of the chip, and the included angle between the chip and the horizontal axis is the included angle between the chip and the horizontal axis in the second image of the chip;
step (5.21) preprocessing the chip image, and removing noise points by adopting a bilateral filtering method;
step (5.22): performing binarization processing on the image by the maximum between-class variance (Otsu) method to extract the chip region:

g = w1 x w2 x (u1 - u2)^2

wherein M x N is the total number of pixels in the chip image, N1 is the number of pixels whose gray value is smaller than the set gray value, N2 is the number of pixels whose gray value is greater than the set gray value, w1 = N1 / (M x N) represents the proportion of the number of pixel points in the chip region to the total number of pixel points in the whole chip image, u1 represents the mean gray value of all pixel points in the chip region, w2 = N2 / (M x N) represents the proportion of the number of pixel points in the region outside the chip region to the total number of pixel points in the whole chip image, u2 represents the mean gray value of all pixel points in the region outside the chip region, w1 + w2 = 1, and g represents the between-class variance between the chip region and the region outside the chip region;
step (5.23) performing morphological dilation processing on the image, removing interference areas and smoothing the edges;
step (5.24) extracting the edge contour of the image processed in the step (5.23), and screening out the contour of the chip according to the area and the length of the edge contour;
and (5.25) calculating the coordinates of the central point of the chip and the included angle between the chip and the horizontal axis according to the contour screened in the step (5.24).
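The criterion of step (5.22) can be sketched as an exhaustive search for the threshold maximizing g = w1 x w2 x (u1 - u2)^2; the gray values below are synthetic, and a real implementation would use a histogram rather than this brute-force loop:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing the between-class variance g = w1*w2*(u1 - u2)^2."""
    vals = np.asarray(gray, dtype=float).ravel()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        lo, hi = vals[vals < t], vals[vals >= t]
        if lo.size == 0 or hi.size == 0:
            continue                                      # both classes must be non-empty
        w1, w2 = lo.size / vals.size, hi.size / vals.size  # class weights, w1 + w2 = 1
        g = w1 * w2 * (lo.mean() - hi.mean()) ** 2         # between-class variance
        if g > best_g:
            best_t, best_g = t, g
    return best_t

# Two well-separated gray clusters (dark board vs. bright chip).
gray = np.array([10, 12, 14, 11] * 8 + [200, 210, 205, 198] * 8)
t = otsu_threshold(gray)
```

For a clearly bimodal image the chosen threshold falls between the two clusters, separating the chip region from the background.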
10. A chip mounting system guided by robot vision is characterized by comprising a robot and a chip mounting workbench; the robot comprises a robot body, a control system and a vision system; the control system is in communication connection with the vision system; the chip mounting workbench is provided with a chip placing area and a PCB placing area, a second industrial camera is arranged between the chip placing area and the PCB placing area, and the second industrial camera is in communication connection with the vision system;
the vision system comprises a first industrial camera, an image processing unit and an image storage unit; the first industrial camera is mounted on a hand of the robot;
the image processing unit processes the image collected by the first industrial camera and the image collected by the second industrial camera by using the robot vision-guided chip mounting method according to any one of claims 1 to 9;
the image storage unit is used for storing data and information processed by the vision system;
the control system, which controls the robot to execute corresponding actions according to the processing result of the vision system by using the chip mounting method guided by the robot vision as claimed in any one of claims 1-9.
CN201810582133.2A 2018-06-07 2018-06-07 A kind of chip attachment method and system of robot vision guidance Active CN108766894B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810582133.2A CN108766894B (en) 2018-06-07 2018-06-07 A kind of chip attachment method and system of robot vision guidance


Publications (2)

Publication Number Publication Date
CN108766894A CN108766894A (en) 2018-11-06
CN108766894B true CN108766894B (en) 2019-11-05

Family

ID=63999434


Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109781164B (en) * 2018-12-28 2021-02-05 长沙长泰机器人有限公司 Static calibration method of line laser sensor
CN109719734B (en) * 2019-03-12 2021-12-21 湖南大学 Robot vision-guided mobile phone flashlight assembling system and assembling method
CN110231036B (en) * 2019-07-19 2020-11-24 广东博智林机器人有限公司 Robot positioning device and method based on cross laser and machine vision
CN110281069B (en) * 2019-07-23 2024-05-03 琦星智能科技股份有限公司 Irregular product processing equipment based on industrial robot vision and vision control thereof
CN110842931B (en) * 2019-07-30 2022-03-22 南京埃斯顿机器人工程有限公司 Tool posture adjusting method applied to robot punching
JP7285162B2 (en) * 2019-08-05 2023-06-01 ファスフォードテクノロジ株式会社 Die bonding apparatus and semiconductor device manufacturing method
CN111012506B (en) * 2019-12-28 2021-07-27 哈尔滨工业大学 Robot-assisted puncture surgery end tool center calibration method based on stereoscopic vision
CN112208113B (en) * 2020-08-13 2022-09-06 苏州赛米维尔智能装备有限公司 Automatic heat-conducting cotton attaching device based on visual guidance and attaching method thereof
CN113819839B (en) * 2020-10-13 2022-08-23 常州铭赛机器人科技股份有限公司 Automatic pasting calibration method, device and equipment
CN112589401B (en) * 2020-11-09 2021-12-31 苏州赛腾精密电子股份有限公司 Assembling method and system based on machine vision
CN112496696A (en) * 2020-11-24 2021-03-16 福州大学 Automatic assembling vision measuring system for radio frequency line inside smart phone
CN112991461A (en) * 2021-03-11 2021-06-18 珠海格力智能装备有限公司 Material assembling method and device, computer readable storage medium and processor
CN113510697B (en) * 2021-04-23 2023-02-14 知守科技(杭州)有限公司 Manipulator positioning method, device, system, electronic device and storage medium
CN113103238A (en) * 2021-04-26 2021-07-13 福建(泉州)哈工大工程技术研究院 Hand-eye calibration method based on data optimization
CN113059579A (en) * 2021-04-30 2021-07-02 中国铁建重工集团股份有限公司 Flexible operation device
CN113382555B (en) * 2021-08-09 2021-10-29 常州铭赛机器人科技股份有限公司 Chip mounter suction nozzle coaxiality error automatic calibration method based on machine vision
CN114347013A (en) * 2021-11-05 2022-04-15 深港产学研基地(北京大学香港科技大学深圳研修院) Method for assembling printed circuit board and FPC flexible cable and related equipment
EP4033871B1 (en) 2021-11-06 2024-04-10 Fitech sp. z o.o. Method of mounting suitable for positioning through-hole components on a printed circuit board pcb
CN113894817B (en) * 2021-11-15 2023-07-25 广东天凛高新科技有限公司 Crawler-type intelligent pouring robot work method
CN114293779B (en) * 2021-11-15 2023-10-03 广东天凛高新科技有限公司 Crawler-type intelligent pouring robot
CN114322933B (en) * 2021-12-28 2024-06-11 珠海市运泰利自动化设备有限公司 Visual feedback compensation method based on tray inclination angle
CN114872038B (en) * 2022-04-13 2023-12-01 欣旺达电子股份有限公司 Micro-needle buckling vision self-calibration system and calibration method thereof
CN115101947A (en) * 2022-05-06 2022-09-23 南京航空航天大学 Online compensation method and system for assembly error of array antenna unit

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP6021533B2 (en) * 2012-09-03 2016-11-09 キヤノン株式会社 Information processing system, apparatus, method, and program
CN105234943B (en) * 2015-09-09 2018-08-14 大族激光科技产业集团股份有限公司 A kind of industrial robot teaching device and method of view-based access control model identification
CN107470170B (en) * 2017-07-13 2019-03-19 上海第二工业大学 PCB detection sorting system and method based on machine vision


Similar Documents

Publication Publication Date Title
CN108766894B (en) A kind of chip attachment method and system of robot vision guidance
CN110497187B (en) Sun flower pattern assembly system based on visual guidance
CN109719734B (en) Robot vision-guided mobile phone flashlight assembling system and assembling method
CN109483531B (en) Machine vision system and method for picking and placing FPC board by manipulator at fixed point
CN109029299B (en) Dual-camera measuring device and method for butt joint corner of cabin pin hole
CN112223285B (en) Robot hand-eye calibration method based on combined measurement
CN111645074A (en) Robot grabbing and positioning method
CN109443206B (en) System and method for measuring tail end pose of mechanical arm based on color spherical light source target
CN109448054A (en) The target Locate step by step method of view-based access control model fusion, application, apparatus and system
CN107192331A (en) A kind of workpiece grabbing method based on binocular vision
CN110751691B (en) Automatic pipe fitting grabbing method based on binocular vision
CN110717943A (en) Method and system for calibrating eyes of on-hand manipulator for two-dimensional plane
CN110136204B (en) Sound film dome assembly system based on calibration of machine tool position of bilateral telecentric lens camera
CN106625673A (en) Narrow space assembly system and assembly method
CN114494045A (en) Large-scale straight gear geometric parameter measuring system and method based on machine vision
CN110648362B (en) Binocular stereo vision badminton positioning identification and posture calculation method
CN111405842B (en) Pin self-adaptive positioning plug-in mounting method and system for three-pin electronic component
CN113103235B (en) Method for vertically operating cabinet surface equipment based on RGB-D image
CN110136068B (en) Sound membrane dome assembly system based on position calibration between bilateral telecentric lens cameras
CN111862238A (en) Full-space monocular light pen type vision measurement method
CN115830018B (en) Carbon block detection method and system based on deep learning and binocular vision
CN115205286B (en) Method for identifying and positioning bolts of mechanical arm of tower-climbing robot, storage medium and terminal
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
CN110125662B (en) Automatic assembling system for sound film dome
CN106441238A (en) Positioning device and positioning navigation algorithm of robot based on infrared visual technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant