CN113483664B - Screen plate automatic feeding system and method based on line structured light vision - Google Patents

Screen plate automatic feeding system and method based on line structured light vision

Info

Publication number
CN113483664B
CN113483664B (application CN202110817326.3A)
Authority
CN
China
Prior art keywords
point cloud
screen plate
pose
point
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110817326.3A
Other languages
Chinese (zh)
Other versions
CN113483664A (en)
Inventor
王志远
邰凤阳
康庆
朱远鹏
王化明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Kepai Fali Intelligent System Co.,Ltd.
Original Assignee
Cubespace Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cubespace Co ltd filed Critical Cubespace Co ltd
Priority to CN202110817326.3A
Publication of CN113483664A
Application granted
Publication of CN113483664B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B65 CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65G TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G35/00 Mechanical conveyors not otherwise provided for
    • B65G47/00 Article or material-handling devices associated with conveyors; Methods employing such devices
    • B65G47/74 Feeding, transfer, or discharging devices of particular kinds or types
    • B65G47/90 Devices for picking-up and depositing articles or materials

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a screen plate automatic feeding system and method based on line structured light vision. The method comprises the following steps: calibrating the system; when the AGV transports the screen plate into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen plate, and the scanned data are sent to the upper computer; the upper computer processes the screen plate data scanned by the line structured light vision module and sends the processing result to the robot; and the robot grabs and feeds the screen plate according to the processing result of the upper computer. The present application achieves automatic feeding, greatly reduces the labor intensity of workers, and at the same time improves production efficiency and ensures production quality.

Description

Screen plate automatic feeding system and method based on line structured light vision
Technical Field
The invention belongs to the field of screen manufacturing, and particularly relates to an automatic screen plate feeding system and method based on line structured light vision.
Background
The screen has long been an important component of traditional Chinese furniture. A screen is generally placed in a prominent position in a room and serves functions such as partition, decoration, wind shielding and coordination. At present, the manufacturing of screens is mostly manual work, which requires a large amount of time and labor, and the operation precision cannot be effectively guaranteed. Lacking the assistance of sensors, a traditional machine can only act according to a pre-programmed routine, the material placing position must be set in advance, and random tasks such as grabbing arbitrarily placed objects cannot be completed.
Disclosure of Invention
The embodiments of the present application provide a screen plate automatic feeding system and method based on line structured light vision, which perform automatic feeding, greatly reduce the labor intensity of workers, improve production efficiency and ensure production quality.
In a first aspect, an embodiment of the present application provides a screen plate automatic feeding system based on line structured light vision, including:
a line structured light vision module, a rotary platform, a PLC (programmable logic controller), an upper computer, a robot and an AGV;
the AGV is used for transporting the screen plate into the scanning area of the line structured light vision module;
the PLC controller is used for controlling the rotation of the rotary platform;
the line structured light vision module is fixed on the rotary platform and is used for scanning the screen plate and sending the scanned data to the upper computer;
the upper computer is used for processing the screen plate data scanned by the line structured light vision module and sending the processing result to the robot;
and the robot is used for grabbing and feeding the screen plate according to the processing result of the upper computer.
The line structured light vision module comprises a CCD industrial camera, a straight-line red laser and an optical filter; the CCD industrial camera and the line laser form a preset included angle, and the filter is installed in front of the lens of the CCD industrial camera.
In a second aspect, the present application provides a screen plate automatic feeding method based on line structured light vision, using the above screen plate automatic feeding system, which includes:
calibrating the system;
when the AGV transports the screen plate into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen plate, and the scanned data are sent to the upper computer;
the upper computer processes the screen plate data scanned by the line structured light vision module and sends the processing result to the robot;
and the robot grabs and feeds the screen plate according to the processing result of the upper computer.
Wherein the upper computer's processing of the screen plate data scanned by the line structured light vision module includes:
processing the scanned data to obtain the pose of the screen plate in the camera coordinate system;
and converting the pose of the screen plate from the camera coordinate system into the robot coordinate system.
Wherein obtaining the pose of the screen plate in the camera coordinate system includes the following.
Let (a, b, c) be a point on the rotation axis of the rotary platform, (u, v, w) the unit direction vector of the axis, and θ the rotation angle. The coordinate transformation matrix for splicing single-frame line-scan point clouds in the camera coordinate system (the standard rotation about an arbitrary spatial axis) is:

$$
T(\theta)=\begin{bmatrix}
u^{2}+(v^{2}+w^{2})\cos\theta & uv(1-\cos\theta)-w\sin\theta & uw(1-\cos\theta)+v\sin\theta & t_{x}\\
uv(1-\cos\theta)+w\sin\theta & v^{2}+(u^{2}+w^{2})\cos\theta & vw(1-\cos\theta)-u\sin\theta & t_{y}\\
uw(1-\cos\theta)-v\sin\theta & vw(1-\cos\theta)+u\sin\theta & w^{2}+(u^{2}+v^{2})\cos\theta & t_{z}\\
0 & 0 & 0 & 1
\end{bmatrix}
$$

where

$$
\begin{aligned}
t_{x}&=\bigl(a(v^{2}+w^{2})-u(bv+cw)\bigr)(1-\cos\theta)+(bw-cv)\sin\theta,\\
t_{y}&=\bigl(b(u^{2}+w^{2})-v(au+cw)\bigr)(1-\cos\theta)+(cu-aw)\sin\theta,\\
t_{z}&=\bigl(c(u^{2}+v^{2})-w(au+bv)\bigr)(1-\cos\theta)+(av-bu)\sin\theta.
\end{aligned}
$$
performing point cloud down-sampling with an improved voxel filtering algorithm: a three-dimensional voxel grid is created over the input point cloud, and all points in each voxel are represented by the original point closest to the voxel centroid;
performing a first plane-model segmentation of the point cloud with a sample-consensus algorithm, then continuing the segmentation with a region-growing method to remove vertical cluster points and noise points, finally obtaining the skeleton of the screen plate point cloud;
and registering the screen plate skeleton point cloud.
Wherein the registration of the screen plate skeleton point cloud comprises:
extracting the boundary of the screen plate skeleton point cloud by a longitude and latitude scanning method, fitting straight lines to the four edges of the screen plate skeleton with the RANSAC algorithm, and then calculating the spatial coordinates of the four corner points of the skeleton;
performing coarse registration based on a Euclidean distance constraint: the extracted corner points are coarsely registered to the model corner points with the ICP (Iterative Closest Point) algorithm. Let p_1 and p_2 be a corresponding point pair on the coarse registration result and on the target point cloud, c_1 and c_2 the geometric centers of the coarsely registered point cloud and of the target point cloud, respectively, and δ a distance constraint threshold. If

$$\bigl|\,\|p_1-c_1\|-\|p_2-c_2\|\,\bigr|<\delta$$

is satisfied, p_1 and p_2 are considered a valid match; otherwise the match is considered invalid and the corresponding point pair is removed;
and performing fine registration with a weight coefficient and an iteration factor.
Extracting the boundary of the screen plate skeleton point cloud by the longitude and latitude scanning method, fitting straight lines to the four edges with the RANSAC algorithm, and calculating the spatial coordinates of the four corner points comprises:
finding the maximum value x_max and the minimum value x_min of the x coordinates of the point cloud; given a resolution r, computing the division step Δx = (x_max − x_min)/r; scanning the point cloud and, for each interval [x_min + (i−1)Δx, x_min + iΔx) with i = 1, 2, …, r, recording the points whose y coordinate takes its minimum and maximum; scanning the point cloud once more along the y direction in the same way, the boundary of the point cloud being formed from the results of the two scans;
given a distance threshold d, fitting straight lines to the four edges of the screen plate skeleton with the RANSAC algorithm;
and calculating the spatial coordinates of the four vertices of the screen plate from the equations of the lines on which the four edges of the skeleton lie.
Wherein the fine registration with a weight coefficient and an iteration factor comprises the following steps:
S3.4.3.1: given the original point cloud P and the target point cloud Q, initialize the transformation matrix H_0 = H*, where H* is the result of the coarse registration, the weight coefficient α > 1, the dynamic iteration factor m = 0 and the iteration count k = 0;
S3.4.3.2: update the original point cloud P with the pose matrix increment ΔH_k;
S3.4.3.3: for each point of the original point cloud P, search for the closest point in the target point cloud Q, and reorder the target point set accordingly;
S3.4.3.4: solve for the pose matrix increment ΔH_{k+1} through

$$\Delta H_{k+1}=\arg\min_{\Delta H}\Bigl(\sum_{i=1}^{n_p}\|\Delta H\,p_i-q_i\|^2+\alpha\sum_{i=1}^{n'_p}\|\Delta H\,p_i-q_i\|^2\Bigr),$$

where p_i and q_i are points on the point clouds P and Q, n_p and n'_p are the numbers of points in the non-interest and interest regions of P, respectively, and the weight coefficient α > 1;
S3.4.3.5: if the root mean square distance error err increases, let m = m + 1; otherwise let m = 0;
S3.4.3.6: if m > 0, apply H_{k+1} = ΔH_{k+1}·H_k m times to solve the pose transformation matrix;
S3.4.3.7: repeat steps S3.4.3.2 to S3.4.3.6 until the root mean square distance error err falls below a given value or the iteration count k reaches its maximum.
Wherein converting the pose of the screen plate from the camera coordinate system into the robot coordinate system comprises the following.
Denote by T_BE the pose matrix of the robot end effector in the robot base coordinate system in the grasping state.
The pose transformation T_EG between the robot end and the gripper is obtained by tool coordinate system calibration.
The pose transformation T_GO between the target part and the gripper in the grasping attitude is defined from the dimensions of the part and of the manipulator.
The pose matrix T_CO of the grasped screen plate in the camera coordinate system is obtained through point cloud registration.
The pose transformation T_BC between the camera coordinate system and the robot base coordinate system is obtained through hand-eye calibration.
From the kinematic chain T_BE · T_EG · T_GO = T_BC · T_CO, the pose matrix of the robot end effector in the robot base coordinate system in the grasping state is calculated as

$$T_{BE}=T_{BC}\,T_{CO}\,T_{GO}^{-1}\,T_{EG}^{-1}.$$
In a third aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of any one of the above methods.
The screen plate automatic feeding system and method based on line structured light vision have the following beneficial effects:
The screen plate automatic feeding method of the present application includes: calibrating the system; when the AGV transports the screen plate into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen plate, and the scanned data are sent to the upper computer; the upper computer processes the scanned data and sends the processing result to the robot; and the robot grabs and feeds the screen plate according to the processing result. The application thus achieves automatic feeding, greatly reduces the labor intensity of workers, and at the same time improves production efficiency and ensures production quality.
Drawings
Fig. 1 is a schematic structural diagram of a screen panel automatic feeding system based on line structured light vision according to the present application;
FIG. 2 is a schematic structural diagram of another screen panel automatic feeding system based on line structured light vision according to the present application;
fig. 3 is a schematic flow chart of a screen plate automatic feeding method based on line structured light vision in the embodiment of the present application;
FIG. 4 is a flow chart of the visual positioning software of the present application;
FIG. 5 is a first flowchart of a point cloud registration algorithm in the present application;
FIG. 6 is a second flowchart of a point cloud registration algorithm in the present application;
FIG. 7 is a screen plate skeleton result diagram obtained by point cloud segmentation in the present application;
FIG. 8.1 is a first diagram of the result of point cloud registration in the present application;
fig. 8.2 is a graph two of the result of point cloud registration in the present application.
Detailed Description
The present application is further described with reference to the following figures and examples.
In the following description, the terms "first" and "second" are used for descriptive purposes only and are not intended to indicate or imply relative importance. The following description provides embodiments of the invention, which may be combined with or substituted for one another; this application therefore covers all possible combinations of the embodiments described. Thus, if one embodiment includes features A, B, C and another embodiment includes features B, D, this application should also be construed to include embodiments containing every other possible combination of one or more of A, B, C, D, even if such embodiments are not explicitly recited in the following text.
The following description provides examples, and does not limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements described without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For example, the described methods may be performed in a different order than described, and various steps may be added, omitted, or combined. Furthermore, features described with respect to some examples may be combined into other examples.
The screen is an important component of traditional Chinese furniture and has a long history. A screen is generally placed in a prominent position in a room and serves functions such as partition, decoration, wind shielding and coordination. At present, the manufacturing of screens is mostly manual work, which requires a large amount of time and labor, and the operation precision cannot be effectively guaranteed. Lacking the assistance of sensors, a traditional machine can only act according to a pre-programmed routine, the material placing position must be set in advance, and random tasks such as grabbing arbitrarily placed objects cannot be completed. There is therefore an urgent need for an apparatus capable of automatic loading and unloading, so as to greatly reduce the labor intensity of workers, improve production efficiency and ensure production quality.
The invention relates to a screen plate automatic feeding system and method based on line structured light vision. The system consists of a line structured light vision module, a rotary platform, a PLC controller, a robot and an AGV. The method comprises the following steps: calibration of the measuring system (camera calibration, structured light plane calibration, rotary platform axis calibration and robot hand-eye calibration), structured light center line extraction, three-dimensional point cloud generation, point cloud processing and pose estimation. The invention selects a high-resolution industrial CCD camera and drives the line structured light measuring system with the rotary platform, enabling large-range scanning and high-precision non-contact measurement. After the three-dimensional point cloud of the scanned object (the screen plate) is obtained, the pose of the screen plate in the camera coordinate system is obtained through down-sampling, segmentation, registration and related processing, and is converted through calibration into the pose in the robot coordinate system for grabbing and feeding. Applied on a production line, the invention scans the screen plate, generates a point cloud and estimates its pose by machine vision, so that automatic loading and unloading can be performed, greatly reducing the labor intensity of workers while improving production efficiency and ensuring production quality.
As shown in figs. 1-2, the screen plate automatic feeding system based on line structured light vision of the present application includes: a line structured light vision module 12, a rotary platform 11, a PLC (Programmable Logic Controller) 10, an upper computer 13, a robot 14 and an AGV 15.
An AGV (Automated Guided Vehicle), often called an AGV cart, is a transport vehicle equipped with an electromagnetic or optical automatic navigation device that can travel along a predetermined route and provides safety protection and various transfer functions. In industrial applications it is a driverless transport vehicle powered by a rechargeable battery. Its traveling path and behavior can be controlled by a computer, or its path can be defined by electromagnetic tracks fixed on the floor, whose signals the vehicle follows.
The AGV 15 is used for transporting the screen plate 16 into the scanning area of the line structured light vision module 12; the PLC controller 10 is used for controlling the rotation of the rotary platform 11; the line structured light vision module 12 is fixed on the rotary platform 11 and is used for scanning the screen plate and sending the scanned data to the upper computer 13; the upper computer 13 is used for processing the screen plate data scanned by the line structured light vision module 12 and sending the processing result to the robot 14; and the robot 14 is used for grabbing and feeding the screen plate according to the processing result of the upper computer 13.
The line structured light vision module 12 includes a CCD (Charge-Coupled Device) industrial camera, a straight-line red laser and an optical filter. The CCD industrial camera and the line laser form a fixed included angle, and their relative position is fixed. The filter is a narrow-band red filter installed in front of the lens of the CCD industrial camera. The upper computer 13 is, for example, a personal computer. As shown in FIG. 2, the vision sensor 121 scans the screen plates being transported on the AGV 15.
In some embodiments, visual positioning software runs on the upper computer and comprises a system calibration module, an image processing module, a point cloud processing module, a PLC control module and a robot control module. The system calibration module covers camera calibration, structured light plane calibration, rotary platform axis calibration and robot hand-eye calibration. The image processing module separates the line structured light stripe from the background and extracts its center line. The point cloud processing module splices, down-samples, segments and registers the screen plate point cloud generated by scanning.
The screen plate automatic feeding system based on line structured light vision of the present application performs automatic feeding, greatly reduces the labor intensity of workers, and at the same time improves production efficiency and ensures production quality.
As shown in figs. 3-8.2, the present application provides a screen plate automatic feeding method based on line structured light vision, using the above system, comprising: S101, calibrating the system; S103, when the AGV transports the screen plate into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen plate, and the scanned data are sent to the upper computer; S105, the upper computer processes the screen plate data scanned by the line structured light vision module and sends the processing result to the robot; S107, the robot grabs and feeds the screen plate according to the processing result of the upper computer. These steps are described in detail below.
S101, calibrating the system (this step can be skipped if the system has already been calibrated).
The calibration comprises the following steps (a minimal code sketch follows this list):
camera calibration: the camera is calibrated by Zhang's method to obtain its intrinsic and extrinsic parameters;
structured light plane calibration: the structured light plane is calibrated by the direct method, and its plane equation is obtained by least-squares fitting;
rotary platform axis calibration: coordinates of several points on the rotation axis are obtained via the camera extrinsics and space-circle fitting, and the axis equation is then fitted by least squares;
robot hand-eye calibration: eye-to-hand calibration is performed with the Tsai-Lenz algorithm;
and tool coordinate system calibration: the position of the tool coordinate system is determined by TCP calibration and its orientation by TCF calibration.
S103, when the AGV transports the screen plate into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen plate, and the scanned data are sent to the upper computer.
When the AGV transports the screen plate below the line structured light scanning system, the system is started and the PLC controls the rotary platform to rotate, completing the full scan of the screen plate.
S105, the upper computer processes the screen plate data scanned by the line structured light vision module and sends the processing result to the robot.
The upper computer's processing of the scanned screen plate data includes: S1051, processing the scanned data to obtain the pose of the screen plate in the camera coordinate system; and S1052, converting the pose of the screen plate from the camera coordinate system into the robot coordinate system.
S1051, the upper computer processes the screen plate data scanned by the line structured light vision module to obtain the pose of the screen plate in the camera coordinate system.
In this step, after the vision module finishes scanning, the point cloud processing and pose estimation modules of the software are run to obtain the pose of the screen plate in the camera coordinate system.
Let (a, b, c) be a point on the rotation axis of the rotary platform, (u, v, w) the unit direction vector of the axis, and θ the rotation angle. The coordinate transformation matrix for splicing single-frame line-scan point clouds in the camera coordinate system (the standard rotation about an arbitrary spatial axis) is:

$$
T(\theta)=\begin{bmatrix}
u^{2}+(v^{2}+w^{2})\cos\theta & uv(1-\cos\theta)-w\sin\theta & uw(1-\cos\theta)+v\sin\theta & t_{x}\\
uv(1-\cos\theta)+w\sin\theta & v^{2}+(u^{2}+w^{2})\cos\theta & vw(1-\cos\theta)-u\sin\theta & t_{y}\\
uw(1-\cos\theta)-v\sin\theta & vw(1-\cos\theta)+u\sin\theta & w^{2}+(u^{2}+v^{2})\cos\theta & t_{z}\\
0 & 0 & 0 & 1
\end{bmatrix}
$$

where

$$
\begin{aligned}
t_{x}&=\bigl(a(v^{2}+w^{2})-u(bv+cw)\bigr)(1-\cos\theta)+(bw-cv)\sin\theta,\\
t_{y}&=\bigl(b(u^{2}+w^{2})-v(au+cw)\bigr)(1-\cos\theta)+(cu-aw)\sin\theta,\\
t_{z}&=\bigl(c(u^{2}+v^{2})-w(au+bv)\bigr)(1-\cos\theta)+(av-bu)\sin\theta.
\end{aligned}
$$
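For illustration, a numpy sketch that builds this transform in an equivalent factored form (translate the axis to the origin, rotate with the Rodrigues formula, translate back) and splices one line-scan frame into the common camera frame; it assumes (u, v, w) is a unit vector.

```python
import numpy as np

def axis_rotation_matrix(a, b, c, u, v, w, theta):
    """4x4 homogeneous transform: rotation by theta about the line through
    (a, b, c) with unit direction (u, v, w)."""
    T_to = np.eye(4)
    T_to[:3, 3] = [-a, -b, -c]                 # move axis point to origin
    T_back = np.eye(4)
    T_back[:3, 3] = [a, b, c]                  # move it back afterwards
    K = np.array([[0, -w, v],
                  [w,  0, -u],
                  [-v, u,  0]])                # cross-product matrix of the axis
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)  # Rodrigues
    T_rot = np.eye(4)
    T_rot[:3, :3] = R
    return T_back @ T_rot @ T_to

def splice_frame(points, a, b, c, u, v, w, theta):
    """Map one N x 3 line-scan frame captured at platform angle theta into the
    common camera coordinate system."""
    T = axis_rotation_matrix(a, b, c, u, v, w, theta)
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]
```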
point cloud down-sampling using an improved voxel filtering algorithm. Voxel filtering is performed by creating a three-dimensional voxel grid of the input point cloud data, with the center of gravity of all points in each voxel being used to approximate all points within the voxel. Since the point is not necessarily a point in the original point cloud, the loss of fine features in the original point cloud is caused. Therefore, the point closest to the voxel gravity center point in the original point cloud data can be used for replacing the voxel gravity center point, so that the expression accuracy of the point cloud data is improved. And performing point cloud down-sampling by using an improved voxel filtering algorithm, creating a three-dimensional voxel grid for the input point cloud data, and representing all points in the voxel by using the point closest to the center of gravity of the voxel in the original point cloud data.
Point cloud segmentation: a first plane-model segmentation of the point cloud is performed with a sample-consensus algorithm, and the segmentation then continues with a region-growing method to remove vertical cluster points and noise points, finally yielding the skeleton of the screen plate point cloud.
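A rough Open3D-based sketch of this segmentation stage. DBSCAN clustering is used here as a stand-in for the region-growing step (Open3D has no built-in region growing), and all thresholds are assumed values.

```python
import numpy as np
import open3d as o3d

def extract_skeleton(pcd, plane_thresh=2.0, cluster_eps=5.0):
    """Keep the dominant plane found by RANSAC (sample consensus), then keep
    only the largest cluster on that plane to discard clumps and noise."""
    model, inliers = pcd.segment_plane(distance_threshold=plane_thresh,
                                       ransac_n=3, num_iterations=1000)
    plane = pcd.select_by_index(inliers)
    labels = np.asarray(plane.cluster_dbscan(eps=cluster_eps, min_points=10))
    if labels.size == 0 or labels.max() < 0:   # nothing but noise
        return plane
    largest = np.argmax(np.bincount(labels[labels >= 0]))
    return plane.select_by_index(np.nonzero(labels == largest)[0])
```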
Point cloud registration: the screen plate skeleton point cloud is registered with an improved ICP (Iterative Closest Point) algorithm.
The point cloud registration comprises the following steps:
point cloud feature extraction: the boundary of the screen plate skeleton point cloud is extracted by a longitude and latitude scanning method, straight lines are fitted to the four edges of the skeleton with the RANSAC algorithm, and the spatial coordinates of the four corner points of the skeleton are then calculated;
coarse registration based on a Euclidean distance constraint: the extracted corner points are coarsely registered to the model corner points with the ICP (Iterative Closest Point) algorithm. Let p_1 and p_2 be a corresponding point pair on the coarse registration result and on the target point cloud, c_1 and c_2 the geometric centers of the coarsely registered point cloud and of the target point cloud, respectively, and δ a distance constraint threshold. If

$$\bigl|\,\|p_1-c_1\|-\|p_2-c_2\|\,\bigr|<\delta$$

is satisfied, p_1 and p_2 are considered a valid match; otherwise the match is considered invalid and the corresponding point pair is removed (a minimal sketch of this constraint check is given after this list);
point cloud fine registration: fine registration with a weight coefficient and an iteration factor refines the pose obtained by coarse registration, making the registration result more accurate. To improve registration accuracy and increase the robustness of the algorithm, a weight coefficient α and a dynamic iteration factor m are introduced.
The point cloud feature extraction comprises:
finding the maximum value x_max and the minimum value x_min of the x coordinates of the point cloud; given a resolution r, computing the division step Δx = (x_max − x_min)/r; scanning the point cloud and, for each interval [x_min + (i−1)Δx, x_min + iΔx) with i = 1, 2, …, r, recording the points whose y coordinate takes its minimum and maximum; scanning the point cloud once more along the y direction in the same way, the boundary of the point cloud being formed from the results of the two scans;
given a distance threshold d, fitting straight lines to the four edges of the screen plate skeleton with the RANSAC algorithm;
and calculating the spatial coordinates of the four vertices of the screen plate from the equations of the lines on which the four edges of the skeleton lie.
The "intersection point" of two spatial straight lines is computed as the midpoint of the two feet of their common perpendicular (the shortest segment connecting the lines). Let L_1 and L_2 be two non-coplanar straight lines, P_0 and P_1 two points on L_1, Q_0 and Q_1 two points on L_2, and a and m arbitrary constants.
Points on L_1 and L_2 can then be expressed as

P = aP_0 + (1 − a)P_1,
Q = mQ_0 + (1 − m)Q_1,

where P and Q are points on L_1 and L_2, respectively. The shortest distance between L_1 and L_2 is found by solving

min (P − Q)^2,

which is converted into the overdetermined linear system

Ax = b,

where

A = (P_0 − P_1, Q_0 − Q_1), x = (a, −m)^T, b = Q_1 − P_1.

Its least-squares solution is

x = (A^T A)^{-1} A^T b,

from which the coordinates of P and Q, (x_P, y_P, z_P) and (x_Q, y_Q, z_Q), follow.
Finally the coordinates of the "intersection point" of L_1 and L_2 are obtained as the midpoint

$$\Bigl(\frac{x_P+x_Q}{2},\ \frac{y_P+y_Q}{2},\ \frac{z_P+z_Q}{2}\Bigr).$$
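The derivation above maps directly to code; the sketch below solves Ax = b in the least-squares sense and returns the midpoint of the perpendicular feet.

```python
import numpy as np

def line_intersection(P0, P1, Q0, Q1):
    """Closest-approach 'intersection' of line L1 (through P0, P1) and line
    L2 (through Q0, Q1): least-squares solve of Ax = b, then midpoint of the
    common-perpendicular feet."""
    P0, P1, Q0, Q1 = map(np.asarray, (P0, P1, Q0, Q1))
    A = np.column_stack([P0 - P1, Q0 - Q1])     # 3 x 2 system matrix
    b = Q1 - P1
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # x = (a, -m)
    a, m = x[0], -x[1]
    P = a * P0 + (1 - a) * P1                   # foot on L1
    Q = m * Q0 + (1 - m) * Q1                   # foot on L2
    return (P + Q) / 2
```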
The point cloud fine registration comprises the following steps:
S3.4.3.1: given the original point cloud P and the target point cloud Q, initialize the transformation matrix H_0 = H*, where H* is the result of the coarse registration, the weight coefficient α > 1, the dynamic iteration factor m = 0 and the iteration count k = 0;
S3.4.3.2: update the original point cloud P with the pose matrix increment ΔH_k;
S3.4.3.3: for each point of the original point cloud P, search for the closest point in the target point cloud Q, and reorder the target point set accordingly;
S3.4.3.4: solve for the pose matrix increment ΔH_{k+1} through

$$\Delta H_{k+1}=\arg\min_{\Delta H}\Bigl(\sum_{i=1}^{n_p}\|\Delta H\,p_i-q_i\|^2+\alpha\sum_{i=1}^{n'_p}\|\Delta H\,p_i-q_i\|^2\Bigr),$$

where p_i and q_i are points on the point clouds P and Q, n_p and n'_p are the numbers of points in the non-interest and interest regions of P, respectively, and the weight coefficient α > 1;
S3.4.3.5: if the root mean square distance error err increases, let m = m + 1; otherwise let m = 0;
S3.4.3.6: if m > 0, apply H_{k+1} = ΔH_{k+1}·H_k m times to solve the pose transformation matrix;
S3.4.3.7: repeat steps S3.4.3.2 to S3.4.3.6 until the root mean square distance error err falls below a given value or the iteration count k reaches its maximum.
S1052, converting the pose of the screen plate from the camera coordinate system into the robot coordinate system.
Denote by T_BE the pose matrix of the robot end effector in the robot base coordinate system in the grasping state.
The pose transformation T_EG between the robot end and the gripper is obtained by tool coordinate system calibration.
The pose transformation T_GO between the target part and the gripper in the grasping attitude is defined from the dimensions of the part and of the manipulator.
The pose matrix T_CO of the grasped screen plate in the camera coordinate system is obtained through point cloud registration.
The pose transformation T_BC between the camera coordinate system and the robot base coordinate system is obtained through hand-eye calibration.
From the kinematic chain T_BE · T_EG · T_GO = T_BC · T_CO, the pose matrix of the robot end effector in the robot base coordinate system in the grasping state is calculated as

$$T_{BE}=T_{BC}\,T_{CO}\,T_{GO}^{-1}\,T_{EG}^{-1}.$$
S107, the robot grabs and feeds the screen plate according to the processing result of the upper computer.
The robot controller obtains the control signal from the upper computer and controls the robot to grab and feed the screen plate.
The invention has the following beneficial effects. First, compared with current manual feeding, the method requires only a simple vision system consisting of an industrial camera, a line laser and the like, which saves production cost and improves production efficiency in practical applications. Second, the invention performs point cloud segmentation with sample-consensus and region-growing algorithms and estimates the pose by point cloud template matching, giving it a certain adaptability to deformation of the screen plate.
In the present application, the embodiment of the screen plate automatic feeding method based on line structured light vision is substantially similar to the embodiment of the corresponding system, and the two descriptions may be referred to each other for related points.
It is clear to a person skilled in the art that the solution according to the embodiments of the present invention can be implemented by means of software and/or hardware. The "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, an FPGA (Field-Programmable Gate Array), an IC (Integrated Circuit), or the like.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the screen plate automatic feeding method based on line structured light vision. The computer-readable storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, DVDs, CD-ROMs, microdrives, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or in other forms.
All functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A screen plate automatic feeding method based on line structured light vision, characterized by comprising the following steps:
calibrating the system;
when the AGV transports the screen plate into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen plate, and the scanned data are sent to the upper computer;
the upper computer processes the screen plate data scanned by the line structured light vision module and sends the processing result to the robot;
the robot grabs and feeds the screen plate according to the processing result of the upper computer;
extracting the boundary of the screen plate skeleton point cloud by a longitude and latitude scanning method, fitting straight lines to the four edges of the screen plate skeleton with the RANSAC algorithm, and then calculating the spatial coordinates of the four corner points of the skeleton, comprising:
finding the maximum value x_max and the minimum value x_min of the x coordinates of the point cloud; given a resolution r, computing the division step Δx = (x_max − x_min)/r; scanning the point cloud and, for each interval [x_min + (i−1)Δx, x_min + iΔx) with i = 1, 2, …, r, recording the points whose y coordinate takes its minimum and maximum; scanning the point cloud once more along the y direction in the same way, the boundary of the point cloud being formed from the results of the two scans;
given a distance threshold d, fitting straight lines to the four edges of the screen plate skeleton with the RANSAC algorithm;
calculating the spatial coordinates of the four vertices of the screen plate from the equations of the lines on which the four edges of the skeleton lie;
is provided with two different plane straight lines L 1 And L 2 ,P 0 、P 1 Is L 1 Two points of (1), Q 0 、Q 1 Is L 2 Two points of (a) m is an arbitrary constant;
straight line l 1 And l 2 Can be expressed as
P=aP 0 +(1-a)P 1
Q=mQ 0 +(1-m)Q 1
Wherein P and Q are each L 1 And L 2 A point on; calculating L 1 And L 2 Is solved by the shortest distance
min(P-Q) 2
And then converted into an equation for solving the hyperstatic equation
Ax=b
Wherein
A=(P 0 -P 1 ,Q 0 -Q 1 ),x=(a,-m) T ,b=Q 1 -P 1
Can find out
x=(A T A) -1 A T b
Further, the coordinates of P and Q are (x) respectively P ,y P ,z P )、(x Q ,y Q ,z Q );
Finally obtain L 1 ,L 2 Coordinates of the intersection point of
Figure FDA0003828895700000021
wherein the upper computer's processing of the screen plate data scanned by the line structured light vision module includes fine registration with a weight coefficient and an iteration factor, comprising the following steps:
S3.4.3.1: given the original point cloud P and the target point cloud Q, initialize the transformation matrix H_0 = H*, where H* is the result of the coarse registration, the weight coefficient α > 1, the dynamic iteration factor m = 0 and the iteration count k = 0;
S3.4.3.2: update the original point cloud P with the pose matrix increment ΔH_k;
S3.4.3.3: for each point of the original point cloud P, search for the closest point in the target point cloud Q, and reorder the target point set accordingly;
S3.4.3.4: solve for the pose matrix increment ΔH_{k+1} through

$$\Delta H_{k+1}=\arg\min_{\Delta H}\Bigl(\sum_{i=1}^{n_p}\|\Delta H\,p_i-q_i\|^2+\alpha\sum_{i=1}^{n'_p}\|\Delta H\,p_i-q_i\|^2\Bigr),$$

where p_i and q_i are points on the point clouds P and Q, respectively, n_p and n'_p are the numbers of points in the non-interest and interest regions of P, and the weight coefficient α > 1;
S3.4.3.5: if the root mean square distance error err increases, let m = m + 1; otherwise let m = 0;
S3.4.3.6: if m > 0, apply H_{k+1} = ΔH_{k+1}·H_k m times to solve the pose transformation matrix;
S3.4.3.7: repeat steps S3.4.3.2 to S3.4.3.6 until the root mean square distance error err falls below a given value or the iteration count k reaches its maximum.
2. The screen plate automatic feeding method based on line structured light vision according to claim 1, wherein the upper computer's processing of the screen plate data scanned by the line structured light vision module comprises:
processing the scanned data to obtain the pose of the screen plate in the camera coordinate system;
and converting the pose of the screen plate from the camera coordinate system into the robot coordinate system.
3. The screen plate automatic feeding method based on line structured light vision according to claim 2, wherein obtaining the pose of the screen plate in the camera coordinate system comprises:
letting (a, b, c) be a point on the rotation axis of the rotary platform, (u, v, w) the unit direction vector of the axis, and θ the rotation angle, the coordinate transformation matrix for splicing single-frame line-scan point clouds in the camera coordinate system being:

$$
T(\theta)=\begin{bmatrix}
u^{2}+(v^{2}+w^{2})\cos\theta & uv(1-\cos\theta)-w\sin\theta & uw(1-\cos\theta)+v\sin\theta & t_{x}\\
uv(1-\cos\theta)+w\sin\theta & v^{2}+(u^{2}+w^{2})\cos\theta & vw(1-\cos\theta)-u\sin\theta & t_{y}\\
uw(1-\cos\theta)-v\sin\theta & vw(1-\cos\theta)+u\sin\theta & w^{2}+(u^{2}+v^{2})\cos\theta & t_{z}\\
0 & 0 & 0 & 1
\end{bmatrix}
$$

where

$$
\begin{aligned}
t_{x}&=\bigl(a(v^{2}+w^{2})-u(bv+cw)\bigr)(1-\cos\theta)+(bw-cv)\sin\theta,\\
t_{y}&=\bigl(b(u^{2}+w^{2})-v(au+cw)\bigr)(1-\cos\theta)+(cu-aw)\sin\theta,\\
t_{z}&=\bigl(c(u^{2}+v^{2})-w(au+bv)\bigr)(1-\cos\theta)+(av-bu)\sin\theta;
\end{aligned}
$$

performing point cloud down-sampling with an improved voxel filtering algorithm: a three-dimensional voxel grid is created over the input point cloud, and all points in each voxel are represented by the original point closest to the voxel centroid;
performing a first plane-model segmentation of the point cloud with a sample-consensus algorithm, then continuing the segmentation with a region-growing method to remove vertical cluster points and noise points, finally obtaining the skeleton of the screen plate point cloud;
and registering the screen plate skeleton point cloud.
4. The screen plate automatic feeding method based on line structured light vision according to claim 3, wherein the registration of the screen plate skeleton point cloud comprises:
extracting the boundary of the screen plate skeleton point cloud by a longitude and latitude scanning method, fitting straight lines to the four edges of the skeleton with the RANSAC algorithm, and then calculating the spatial coordinates of the four corner points of the skeleton;
performing coarse registration based on a Euclidean distance constraint: the extracted corner points are coarsely registered to the model corner points with the ICP (Iterative Closest Point) algorithm; letting p_1 and p_2 be a corresponding point pair on the coarse registration result and on the target point cloud, c_1 and c_2 the geometric centers of the coarsely registered point cloud and of the target point cloud, respectively, and δ a distance constraint threshold, if

$$\bigl|\,\|p_1-c_1\|-\|p_2-c_2\|\,\bigr|<\delta$$

is satisfied, p_1 and p_2 are considered a valid match; otherwise the match is considered invalid and the corresponding point pair is removed;
and introducing fine registration with the weight coefficient and the iteration factor.
5. The screen plate automatic feeding method based on line structured light vision according to any one of claims 1 to 4, wherein converting the pose of the screen plate from the camera coordinate system into the robot coordinate system comprises:
denoting by T_BE the pose matrix of the robot end effector in the robot base coordinate system in the grasping state;
obtaining the pose transformation T_EG between the robot end and the gripper by tool coordinate system calibration;
defining the pose transformation T_GO between the target part and the gripper in the grasping attitude from the dimensions of the part and of the manipulator;
obtaining the pose matrix T_CO of the grasped screen plate in the camera coordinate system through point cloud registration;
obtaining the pose transformation T_BC between the camera coordinate system and the robot base coordinate system through hand-eye calibration;
and then calculating the pose matrix of the robot end effector in the robot base coordinate system in the grasping state as

$$T_{BE}=T_{BC}\,T_{CO}\,T_{GO}^{-1}\,T_{EG}^{-1}.$$
6. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN202110817326.3A 2021-07-20 2021-07-20 Screen plate automatic feeding system and method based on line structured light vision Active CN113483664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110817326.3A CN113483664B (en) 2021-07-20 2021-07-20 Screen plate automatic feeding system and method based on line structured light vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110817326.3A CN113483664B (en) 2021-07-20 2021-07-20 Screen plate automatic feeding system and method based on line structured light vision

Publications (2)

Publication Number Publication Date
CN113483664A CN113483664A (en) 2021-10-08
CN113483664B (en) 2022-10-21

Family

ID=77942321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110817326.3A Active CN113483664B (en) 2021-07-20 2021-07-20 Screen plate automatic feeding system and method based on line structured light vision

Country Status (1)

Country Link
CN (1) CN113483664B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387347B (en) * 2021-10-26 2023-09-19 浙江视觉智能创新中心有限公司 Method, device, electronic equipment and medium for determining external parameter calibration
CN117140627B (en) * 2023-10-30 2024-01-26 诺梵(上海)***科技股份有限公司 Screen production line

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101093543A (en) * 2007-06-13 2007-12-26 中兴通讯股份有限公司 Method for correcting image in 2D code of quick response matrix
CN103424086A (en) * 2013-06-30 2013-12-04 北京工业大学 Image collection device for internal surface of long straight pipe
CN105067023A (en) * 2015-08-31 2015-11-18 中国科学院沈阳自动化研究所 Panorama three-dimensional laser sensor data calibration method and apparatus
CN105115560A (en) * 2015-09-16 2015-12-02 北京理工大学 Non-contact measurement method for cabin capacity
CN108180825A (en) * 2016-12-08 2018-06-19 中国科学院沈阳自动化研究所 A kind of identification of cuboid object dimensional and localization method based on line-structured light
CN109249392A (en) * 2018-08-31 2019-01-22 先临三维科技股份有限公司 Calibration method, calibration element, device, equipment and the medium of workpiece grabbing system
CN109272537A (en) * 2018-08-16 2019-01-25 清华大学 A kind of panorama point cloud registration method based on structure light
CN109900204A (en) * 2019-01-22 2019-06-18 河北科技大学 Large forgings size vision measurement device and method based on line-structured light scanning
CN109927036A (en) * 2019-04-08 2019-06-25 青岛小优智能科技有限公司 A kind of method and system of 3D vision guidance manipulator crawl
CN110335297A (en) * 2019-06-21 2019-10-15 华中科技大学 A kind of point cloud registration method based on feature extraction
CN110340891A (en) * 2019-07-11 2019-10-18 河海大学常州校区 Mechanical arm positioning grasping system and method based on cloud template matching technique
CN110455189A (en) * 2019-08-26 2019-11-15 广东博智林机器人有限公司 A kind of vision positioning method and transfer robot of large scale material
CN110728623A (en) * 2019-08-27 2020-01-24 深圳市华讯方舟太赫兹科技有限公司 Cloud point splicing method, terminal equipment and computer storage medium
CN111062938A (en) * 2019-12-30 2020-04-24 科派股份有限公司 Plate expansion plug detection system and method based on machine learning
CN111558940A (en) * 2020-05-27 2020-08-21 佛山隆深机器人有限公司 Robot material frame grabbing planning and collision detection method
CN111820545A (en) * 2020-06-22 2020-10-27 浙江理工大学 Method for automatically generating sole glue spraying track by combining offline and online scanning

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334267B (en) * 2008-07-25 2010-11-24 西安交通大学 Digital image feeler vector coordinate transform calibration and error correction method and its device
US10628949B2 (en) * 2017-12-18 2020-04-21 Samsung Electronics Co., Ltd. Image processing with iterative closest point (ICP) technique
CN109489548B (en) * 2018-11-15 2019-11-12 河海大学 A kind of part processing precision automatic testing method using three-dimensional point cloud
CN109559338B (en) * 2018-11-20 2020-10-27 西安交通大学 Three-dimensional point cloud registration method based on weighted principal component analysis method and M estimation
CN109934859B (en) * 2019-03-18 2023-03-24 湖南大学 ICP (inductively coupled plasma) registration method based on feature-enhanced multi-dimensional weight descriptor
CN111553938A (en) * 2020-04-29 2020-08-18 南京航空航天大学 Multi-station scanning point cloud global registration method based on graph optimization
CN112053432B (en) * 2020-09-15 2024-03-26 成都贝施美医疗科技股份有限公司 Binocular vision three-dimensional reconstruction method based on structured light and polarization

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101093543A (en) * 2007-06-13 2007-12-26 中兴通讯股份有限公司 Method for correcting image in 2D code of quick response matrix
CN103424086A (en) * 2013-06-30 2013-12-04 北京工业大学 Image collection device for internal surface of long straight pipe
CN105067023A (en) * 2015-08-31 2015-11-18 中国科学院沈阳自动化研究所 Panorama three-dimensional laser sensor data calibration method and apparatus
CN105115560A (en) * 2015-09-16 2015-12-02 北京理工大学 Non-contact measurement method for cabin capacity
CN108180825A (en) * 2016-12-08 2018-06-19 中国科学院沈阳自动化研究所 A kind of identification of cuboid object dimensional and localization method based on line-structured light
CN109272537A (en) * 2018-08-16 2019-01-25 清华大学 A kind of panorama point cloud registration method based on structure light
CN109249392A (en) * 2018-08-31 2019-01-22 先临三维科技股份有限公司 Calibration method, calibration element, device, equipment and the medium of workpiece grabbing system
CN109900204A (en) * 2019-01-22 2019-06-18 河北科技大学 Large forgings size vision measurement device and method based on line-structured light scanning
CN109927036A (en) * 2019-04-08 2019-06-25 青岛小优智能科技有限公司 A kind of method and system of 3D vision guidance manipulator crawl
CN110335297A (en) * 2019-06-21 2019-10-15 华中科技大学 A kind of point cloud registration method based on feature extraction
CN110340891A (en) * 2019-07-11 2019-10-18 河海大学常州校区 Mechanical arm positioning grasping system and method based on cloud template matching technique
CN110455189A (en) * 2019-08-26 2019-11-15 广东博智林机器人有限公司 A kind of vision positioning method and transfer robot of large scale material
CN110728623A (en) * 2019-08-27 2020-01-24 深圳市华讯方舟太赫兹科技有限公司 Cloud point splicing method, terminal equipment and computer storage medium
CN111062938A (en) * 2019-12-30 2020-04-24 科派股份有限公司 Plate expansion plug detection system and method based on machine learning
CN111558940A (en) * 2020-05-27 2020-08-21 佛山隆深机器人有限公司 Robot material frame grabbing planning and collision detection method
CN111820545A (en) * 2020-06-22 2020-10-27 浙江理工大学 Method for automatically generating sole glue spraying track by combining offline and online scanning

Also Published As

Publication number Publication date
CN113483664A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
CN112417591B (en) Vehicle modeling method, system, medium and equipment based on holder and scanner
CN111775152B (en) Method and system for guiding mechanical arm to grab scattered stacked workpieces based on three-dimensional measurement
US9707682B1 (en) Methods and systems for recognizing machine-readable information on three-dimensional objects
US9327406B1 (en) Object segmentation based on detected object-specific visual cues
CN113483664B (en) Screen plate automatic feeding system and method based on line structured light vision
JP5788460B2 (en) Apparatus and method for picking up loosely stacked articles by robot
CN110243380B (en) Map matching method based on multi-sensor data and angle feature recognition
CN113096094B (en) Three-dimensional object surface defect detection method
CN112906127B (en) Vehicle modeling method, system, medium and equipment based on holder and scanner
US20230419531A1 (en) Apparatus and method for measuring, inspecting or machining objects
Premachandra et al. A study on hovering control of small aerial robot by sensing existing floor features
CN114474056A (en) Grabbing operation-oriented monocular vision high-precision target positioning method
CN113532277A (en) Method and system for detecting plate-shaped irregular curved surface workpiece
CN113269723A (en) Unordered grasping system for three-dimensional visual positioning and mechanical arm cooperative work parts
JP5544464B2 (en) 3D position / posture recognition apparatus and method for an object
CN113601501B (en) Flexible operation method and device for robot and robot
CN114387344A (en) Cargo carrying tool pose detection method and device, carrying vehicle and medium
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
JP2023041731A (en) Method and computing system for performing robot motion planning and repository detection
US20240003675A1 (en) Measurement system, measurement device, measurement method, and measurement program
CN115131208A (en) Structured light 3D scanning measurement method and system
CN110060330B (en) Three-dimensional modeling method and device based on point cloud image and robot
CN113345023A (en) Positioning method and device of box body, medium and electronic equipment
Nakhaeinia et al. Surface following with an RGB-D vision-guided robotic system for automated and rapid vehicle inspection
CN111854678A (en) Pose measurement method based on semantic segmentation and Kalman filtering under monocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230512

Address after: 226000 No.9 Longyou Road, Wuyao Town, Rugao City, Nantong City, Jiangsu Province

Patentee after: Jiangsu Kepai Fali Intelligent System Co.,Ltd.

Address before: 225000 KEPAI Co., Ltd., No. 11, Jingang Road, Yangzhou City, Jiangsu Province

Patentee before: CUBESPACE CO.,LTD.
