WO2021226891A1 - 3d printing device and method based on multi-axis linkage control and machine visual feedback measurement - Google Patents


Info

Publication number: WO2021226891A1
Authority: WIPO (PCT)
Prior art keywords: printing, image, algorithm, robotic arm, color
Application number: PCT/CN2020/090093
Other languages: French (fr), Chinese (zh)
Inventors: 李俊 (Li Jun), 高银 (Gao Yin), 谢银辉 (Xie Yinhui), 唐康来 (Tang Kanglai)
Original Assignee: 中国科学院福建物质结构研究所 (Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences)
Application filed by 中国科学院福建物质结构研究所
Priority to PCT/CN2020/090093 (WO2021226891A1)
Priority to PCT/CN2021/093520 (WO2021228181A1)
Priority to CN202110523612.9A (CN113674299A)
Publication of WO2021226891A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
        • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
            • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
                • B29C64/00 Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
                    • B29C64/20 Apparatus for additive manufacturing; Details thereof or accessories therefor
                    • B29C64/30 Auxiliary operations or equipment
                        • B29C64/386 Data acquisition or data processing for additive manufacturing
                            • B29C64/393 Data acquisition or data processing for additive manufacturing for controlling or regulating additive manufacturing processes
        • B33 ADDITIVE MANUFACTURING TECHNOLOGY
            • B33Y ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
                • B33Y30/00 Apparatus for additive manufacturing; Details thereof or accessories therefor

Definitions

  • the invention relates to a 3D printing device and method based on multi-axis linkage control and machine vision feedback measurement, and belongs to the field of automation technology.
  • the present invention provides a 3D printing device based on multi-axis linkage control and machine vision feedback, including: a robotic arm, a nozzle, a camera, a printing table, and a driving device and/or a transmission device, wherein the robotic arm is a multi-axis robotic arm, preferably a six-axis robotic arm.
  • preferably four or more cameras are provided; for example, a four-lens camera may be used. More preferably, the cameras are arranged around the robotic arm, for example suspended on its periphery.
  • the present invention also provides a real-time tracking and positioning method for the end of a 3D printing nozzle based on pre-optimization, which includes the following steps:
  • step 1) the printing environment is adjusted and optimized through the pre-optimization method of color segmentation and/or the rapid multi-exposure fusion method.
  • step 2) four camera samples are trained according to the CNN (Convolutional Neural Network) model, the target object is judged and recognized, and the primary ROI area where the target object is located is marked.
  • step 3) a fast least-squares filtering method with adaptive boundary limitation is used to perform edge-preserving smoothing on the image, thereby optimizing the primary ROI region.
  • the tracking and positioning of the end of the 3D printing nozzle is monitored in real time through a visual method, and the printing algorithm is corrected in real time according to the positioning feedback.
  • the method is implemented by using the 3D printing device based on multi-axis linkage control and machine vision feedback.
  • the method for real-time tracking and positioning of the end of the 3D printing nozzle based on pre-optimization may further include the following steps:
  • the robotic-arm detection algorithm is activated to detect the tilt direction of the arm and to activate the two cameras suspended around the arm that face the tilt direction, forming a binocular vision system;
  • the pre-optimization method of color segmentation and the fast multi-exposure fusion method are used to adjust the quality of the image;
  • target tracking is performed on the print head through a correlation-filtering target tracking algorithm (that is, the tracking algorithm adopts the correlation filtering method);
  • whether a camera faces the tilt direction of the robotic arm is determined from the posture information fed back by the end of the arm.
  • the method for real-time tracking and positioning of the end of a 3D printing nozzle based on pre-optimization has a process basically as shown in FIG. 4.
  • a pre-optimization method for color segmentation is also provided, and an exemplary process thereof is shown in FIG. 3.
  • the pre-optimization method for color segmentation includes the following steps:
  • the color thresholds are compared in the H, S, and V spaces respectively according to the predetermined threshold range.
  • the print head colors in the experiment can be multiple, for example, five selected from red, purple, green, cyan and blue;
  • the predetermined threshold range is as follows:
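The threshold table itself is not reproduced in this text. As a hedged sketch of HSV-based color segmentation for the candidate nozzle colors, the following uses purely illustrative ranges; the hue bounds, the saturation/value floors, and the function names are assumptions, not the patent's predetermined values:

```python
# Hedged sketch: HSV-threshold pre-segmentation of the print-head color.
# The ranges below are ILLUSTRATIVE ONLY, not the patent's threshold table.
import colorsys

# Illustrative hue ranges in degrees for the five candidate colors.
ILLUSTRATIVE_RANGES = {
    "red":    [(0, 10), (350, 360)],   # hue wraps around 0 degrees
    "purple": [(270, 310)],
    "blue":   [(210, 270)],
    "cyan":   [(170, 210)],
    "green":  [(90, 150)],
}
MIN_S, MIN_V = 0.3, 0.2  # reject washed-out or very dark pixels

def classify_pixel(r, g, b):
    """Return the color label of an RGB pixel (components 0-255), or None."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_deg = h * 360.0
    if s < MIN_S or v < MIN_V:
        return None                     # fails the S or V threshold
    for name, ranges in ILLUSTRATIVE_RANGES.items():
        if any(lo <= h_deg <= hi for lo, hi in ranges):
            return name                 # hue falls inside this color's range
    return None
```

With per-pixel classification in place, the segmentation mask for a chosen nozzle color is simply the set of pixels whose label matches.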
  • the fast multi-exposure fusion method includes: images are collected continuously while the camera operates, and the average brightness of each image is computed; if it falls below the set brightness value, the fast multi-exposure fusion method is started to optimize the image.
  • the rapid multi-exposure fusion method includes the following steps:
  • the brightness threshold is 30, so the reasonable brightness interval is [30, 255-30], i.e. [30, 225].
  • pixels whose brightness falls inside this interval are assigned weight 1 and the rest 0, to obtain the exposure weight map;
  • the input images are then fused according to the weight maps to obtain the optimized image.
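The weight-map construction and fusion can be sketched as follows. Grayscale frames are modeled as nested lists so the example stays dependency-free; the fallback to a plain mean when every exposure is outside the interval is my assumption, since the text does not say how such pixels are handled:

```python
# Hedged sketch of the fast multi-exposure fusion step: pixels inside the
# "reasonable" brightness interval [30, 255-30] get weight 1, the rest 0,
# and the exposure stack is fused with the resulting weight maps.
LOW, HIGH = 30, 255 - 30  # brightness threshold of 30 on both ends

def exposure_weight(pixel):
    return 1 if LOW <= pixel <= HIGH else 0

def fuse_exposures(frames):
    """frames: list of equally sized 2-D grayscale images (lists of lists)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    fused = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            pixels = [f[r][c] for f in frames]
            weights = [exposure_weight(p) for p in pixels]
            if sum(weights) == 0:        # all over/under-exposed: plain mean
                fused[r][c] = sum(pixels) // len(pixels)
            else:                        # weighted average of valid pixels
                fused[r][c] = (sum(w * p for w, p in zip(weights, pixels))
                               // sum(weights))
    return fused

def mean_brightness(frame):
    """Average brightness used to decide whether fusion is triggered."""
    flat = [p for row in frame for p in row]
    return sum(flat) / len(flat)
```

In use, `mean_brightness` would be checked against the set brightness value on each captured frame, and `fuse_exposures` started only when the image is too dark.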
  • a method for target tracking of print nozzles is also provided, which tracks using a kernelized correlation filter (KCF) target tracking algorithm.
  • HOG features are first extracted from multiple regions around the selected ROI area, and a circulant matrix is then used to solve for the ROI area selected in the next frame.
  • the boundary limitation adaptively adjusts the area of the image boundary through a tolerance mechanism to further regularize the image.
  • the fast least-squares filtering method first seeks the solution of an objective function defined on a weighted L2 norm by decomposing the objective into each spatial dimension and solving each with a fast 1-dimensional solver; the method is then extended to the more general case of an objective defined on a weighted Lr norm (0 < r < 2), or to aggregating data terms that existing edge-preserving (EP) filters cannot realize.
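As an illustration of the 1-dimensional fast solving step: along a single scanline, minimizing sum_i (u_i - f_i)^2 + lam * sum_i w_i (u_i - u_{i-1})^2 leads to a tridiagonal linear system that the Thomas algorithm solves exactly in O(n). This is a generic sketch of that idea, not the patent's exact formulation:

```python
def smooth_1d(f, lam=5.0, w=None):
    """Edge-preserving 1-D weighted least-squares smoothing:
    minimize sum_i (u_i - f_i)^2 + lam * sum_i w_i (u_i - u_{i-1})^2,
    solved exactly with the Thomas (tridiagonal) algorithm."""
    n = len(f)
    if w is None:
        w = [1.0] * n                   # uniform smoothness weights
    # Tridiagonal system (a: sub-, b: main, c: super-diagonal).
    a = [0.0] + [-lam * w[i] for i in range(1, n)]
    c = [-lam * w[i + 1] for i in range(n - 1)] + [0.0]
    b = [1.0 - a[i] - c[i] for i in range(n)]
    d = list(f)
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u
```

Setting a smoothness weight w_i to 0 cuts the link across an edge, which is how the edge-preserving behavior arises; the patent's adaptive boundary-limitation tolerance mechanism is not modeled here.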
  • the K-means method is used for classification processing.
  • the selected ROI area can be divided into 3 categories to distinguish the end of the nozzle, the printing plate surface and the printing material.
  • the category of the print head is the second category of the K-means function classification process, so according to the present invention, only the second category is extracted, and the remaining categories are set to white.
  • the second-category image is extracted to improve shielding against noise interference, and the Canny detection method can then be used to effectively obtain the edge of the end of the print nozzle.
  • these edge points are used as the data points of the Hough line detection, and the Hough line detection is performed.
  • the threshold between the two straight lines is set to 10.
  • the number of fitting pieces in the Hough straight line detection process is set to 3 or less.
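For the intersection step, two Hough lines in (rho, theta) form give a 2x2 linear system. The sketch below interprets the "10" threshold between the two straight lines as a minimum angular separation in degrees; that reading, like the function name, is an assumption:

```python
import math

def hough_intersection(rho1, theta1, rho2, theta2,
                       angle_eps=math.radians(10)):
    """Intersection of two Hough lines x*cos(t) + y*sin(t) = rho.
    Returns None when the lines are within the ~10-degree angular
    threshold, i.e. too close to parallel for a stable intersection."""
    if abs(theta1 - theta2) < angle_eps:
        return None
    a1, b1 = math.cos(theta1), math.sin(theta1)
    a2, b2 = math.cos(theta2), math.sin(theta2)
    det = a1 * b2 - a2 * b1            # nonzero for non-parallel lines
    x = (rho1 * b2 - rho2 * b1) / det
    y = (a1 * rho2 - a2 * rho1) / det
    return (x, y)
```

For example, the vertical line x = 2 (theta = 0, rho = 2) and the horizontal line y = 3 (theta = pi/2, rho = 3) intersect at (2, 3).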
  • the position of the coordinate point and the real-time running status of the printer are judged in real time.
  • if the end point of the print nozzle is below the fitted straight line and the printer is determined to be still moving, printing is still in progress, and this position is taken as the actual position.
  • if the end point of the print nozzle is above or outside the fitted straight line, then regardless of whether the printer is working, the printing process is approaching a stop; detection should be stopped immediately and the position deleted.
  • when the cameras are switched such that the opposing pair can no longer form a binocular view of the end position during the movement of the print nozzle, the direction and position of the print nozzle are reacquired and two cameras in opposite directions are reselected;
  • the reselected cameras again use the CNN model training method to obtain the initial ROI area, and the tracking and detection algorithms are then used to obtain the three-dimensional position of the nozzle in real time, feeding back and adjusting the printing process until the end.
  • step 1) at least one of the following training steps is preferably performed:
  • Input three or more complex printing models (for example, models with varied surface transformations, preferably covering as many of the surface forms encountered in the printing process as possible);
  • the video collected in step d) can be converted into images, and the nozzle area in each image marked as a training sample.
  • the accuracy problem of existing 3D printing methods applied to artificial bone scaffold material arises because, in the three-coordinate printing method adopted, the input model at the beginning of printing may have deviations.
  • since the end of the robotic arm stops and retreats after printing, the end point lies above the contour curve; therefore the position of the end point of the print head must be determined in real time to obtain the actual print-head point.
  • the computer first receives the ideal position of the end of the robotic arm. Owing to the size of the print head, the distance between the print head and the camera, and the quality of the captured image, this point cannot simply be taken as the center of a region designated as the initial ROI.
  • the method of deep learning is adopted in the present invention to obtain the initial target area.
  • a fast multi-exposure fusion method is proposed.
  • This method uses the advantages of multi-exposure fusion to effectively suppress the interference of uneven ambient light on the acquired image, which greatly helps improve the training accuracy on the images and the accuracy of the final ROI area.
  • the monitoring method of the present invention, based on a pre-optimized 3D-printing feedback process, can effectively improve the quality of the printed image and the positional accuracy of the nozzle at the end of the printing robotic arm. It solves the problem that, for lack of a feedback system and image collection, the 3D printing process is vulnerable to interference from the external environment on the material at the end of the print head: the position of the end of the print head can now be detected accurately and effectively in real time, providing reliable information for feedback correction and correcting the trajectory of 3D printing in real time.
  • the flexibility and efficiency of the method of the present invention enable a significant acceleration of a series of applications, which usually need to solve large linear systems.
  • Fig. 1 is a schematic diagram of the 3D printing device of the present invention.
  • Figure 2 is a flowchart of the algorithm for acquiring the initial ROI region.
  • Figure 3 is a flow chart of the pre-optimized algorithm for color segmentation.
  • Figure 4 shows the overall flow chart of the algorithm.
  • Figure 5 is a flow chart of a fast multi-exposure fusion method.
  • Figure 6 is a detection diagram of the tracking sub-process, where a: nozzle tracking position; b: K-means detection diagram; c: edge detection diagram; d: Hough straight line detection diagram; e: end point positioning diagram.
  • Figure 7 is a schematic diagram of finding the intersection of Hough lines.
  • Figure 8 is a schematic diagram of error compensation.
  • Figure 9 is a schematic diagram of a hybrid printing path.
  • Figure 10 is a schematic diagram of a free-form surface model and its point cloud.
  • Figure 11 is a schematic diagram of the cross-section and the overall point cloud fitting.
  • Figure 12 shows the point-cloud triangulation and normal vector calculation results.
  • Figure 13 is a schematic diagram of Euler angle calculation.
  • Figure 14 is a photo of a flat print sample.
  • Figure 15 is a photo of the surface coating effect.
  • Figure 16 shows the detection method of Example 2.
  • Robot: Universal Robots UR3
  • Vision system: Medvision industrial camera, AVT GX6600B; lens: BT-F036.
  • the 3D printing system mainly includes a six-axis robotic arm, a four-axis linkage printing platform, a print head, and a visual tracking positioning module.
  • the movement of the six-axis robotic arm is used to achieve spray printing on the surface of complex, fine objects; the end of the print nozzle is tracked and positioned by the multi-camera system and, combined with the compensating movement of the four-axis linkage printing platform, high-precision 3D printing on the surface of the bioprosthesis is achieved.
  • the exterior of the 3D printing system uses aluminum alloy brackets, and the walls use relatively lightweight PC compression panels.
  • the six-axis robotic arm used in the system has six spatial degrees of freedom, high mobility, and can achieve precise positioning in complex curved spaces.
  • the print nozzle is installed at the end of the six-axis robotic arm.
  • the six-axis robotic arm controls the print head to achieve three-dimensional pattern printing on the body surface.
  • the discharge of the print head is controlled by an electronic pressure regulator to ensure the uniformity of the discharge.
  • the printing platform is a four-axis linkage platform with four degrees of freedom, including linear movement in the X, Y, and Z directions and rotational movement about the Z axis; all axes can move simultaneously.
  • the four-axis linkage platform consists of three linear modules and a high-precision turntable. Through three-dimensional movement in space and rotation about the Z axis, coordinated with the six-axis robotic arm's control of the movement and positioning of the print head, the printing position is adjusted and patterned printing of complex curved surfaces is realized.
  • the main function of the vision hardware is to determine the three-dimensional position of the needle, realizing measurement of the printing needle, automatic position correction, and center measurement of the needle tip and needle mark. A detection scheme for the vision measurement system was designed according to the functional and accuracy requirements of the system.
  • in the multi-camera system used in this project, the light source adopts an opposed, double-row LED design, and the lighting can be adjusted remotely as needed.
  • the vision system is composed of two sets of binocular systems arranged around the work area, and further binocular systems can be added dynamically according to the actual situation.
  • the parallel binocular system measurement method has the advantages of high efficiency, appropriate accuracy, simple system structure, and low cost. It is very suitable for online, non-contact product inspection and quality control at the manufacturing site.
  • the parallel binocular system is an effective measurement method. Since the robotic arm causes occlusion during movement, two sets of vision systems are required to ensure the probe can always be detected; at the same time, multiple camera systems provide more data and can determine the position of the probe more accurately.
  • when the robotic-arm detection algorithm is turned on, it detects the tilt direction of the robotic arm and activates the two cameras suspended around the arm that face the tilt direction to form a binocular vision system; which cameras face the tilt direction of the arm is determined from the posture information fed back by the end of the arm.
  • the pre-optimization method of color segmentation and the fast multi-exposure fusion method are used to adjust the quality of the image.
  • the pre-optimization method for color segmentation includes the following steps:
  • the colors of the nozzles printed in the experiment are five kinds selected from red, purple, green, cyan and blue;
  • the predetermined threshold range is as follows:
  • the visual tracking processing algorithm for the end of the 3D printing nozzle of this project includes a tracking and positioning module, a nozzle extraction module, and a terminal 3D point detection module.
  • High-precision visual measurement and tracking detects, recognizes and tracks specific moving targets by combining machine vision with automation technology, and measures their three-dimensional coordinates. First, based on the position information of the end of the robotic arm, the best two of the high-precision cameras set up all around are selected to form binocular stereo vision; second, a target tracking model combining preset methods with correlation filtering is used to effectively identify and track the end of the print head; finally, the Hough line detection method extracts the end point of the print head, and its position is calculated according to the parallax principle.
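For a rectified parallel binocular rig, the parallax calculation at the end of this pipeline reduces to Z = f * B / d, where d is the horizontal disparity of the end point between the two images. A minimal sketch (the function name and units are mine):

```python
def depth_from_disparity(f_px, baseline_mm, x_left, x_right):
    """Parallax principle for a parallel (rectified) binocular rig:
    depth Z = f * B / d, with d the horizontal disparity in pixels,
    f the focal length in pixels, and B the camera baseline in mm."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at infinity or mismatch")
    return f_px * baseline_mm / d
```

For example, with a 1000 px focal length, a 100 mm baseline, and a 20 px disparity, the end point lies 5000 mm from the rig.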
  • after the visual equipment obtains the actual printed end point, it is compared with the corresponding three-dimensional point of the computer model. If the error is within a certain threshold, the four-axis linkage platform is not started; if it exceeds the threshold, the corresponding compensation is initiated according to the error magnitude.
  • the acquisition of the end point of the 3D printing nozzle is mainly divided into the positioning method of the initial position of the target, the target tracking method, the target extraction algorithm and the end point extraction method.
  • the image is optimized by the fast multi-exposure fusion method to adjust its brightness.
  • the brightness threshold is 30, so the reasonable brightness interval is [30, 255-30], i.e. [30, 225].
  • pixels whose brightness falls inside this interval are assigned weight 1 and the rest 0, to obtain the exposure weight map;
  • the input images are then fused according to the weight maps to obtain the optimized image.
  • the tracking and positioning of the end of the 3D printing nozzle is the most important step in the entire visual inspection. It is the prerequisite for other visual inspections.
  • its main function is to separate the printing nozzle from the entire field of view so that it becomes an independent processing unit for detection by subsequent modules. In this step, samples from the four cameras are first trained according to the CNN (Convolutional Neural Network) model, the target object is judged and recognized, and the primary ROI area where the target is located is marked.
  • the main purpose of tracking the end of the print head is to effectively extract the three-dimensional information of the end point.
  • traditional tracking algorithms can hardly meet the needs of the project, so we adopt a discriminative model and use the classic correlation filtering tracking algorithm.
  • correlation-filter tracking is divided into three steps. First, in frame I_t, samples are taken in the vicinity of the current position P_t and a regressor is trained; this regressor can compute the response of a small sampling window. Second, in frame I_(t+1), samples are taken near the previous position P_t, and the response of each sample is evaluated with the trained regressor. Finally, the sample with the strongest response is taken as the current-frame position P_(t+1).
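The sample/score/argmax structure of those three steps can be shown with a dependency-free stand-in. The real KCF trains a ridge regressor over circulantly shifted HOG samples in the Fourier domain; scoring candidate windows by raw correlation against a fixed template, as below, is a deliberate simplification of that regressor:

```python
def track_step(frame, template, prev_pos, search=2):
    """One correlation-tracking update: sample windows near prev_pos,
    score each by correlation with the template, return the best position.
    frame and template are 2-D grayscale images as nested lists."""
    th, tw = len(template), len(template[0])
    best_pos, best_score = prev_pos, float("-inf")
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = prev_pos[0] + dr, prev_pos[1] + dc
            if r < 0 or c < 0 or r + th > len(frame) or c + tw > len(frame[0]):
                continue                # window falls outside the frame
            score = sum(frame[r + i][c + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            if score > best_score:      # keep the strongest response
                best_score, best_pos = score, (r, c)
    return best_pos
```

Iterating `track_step` frame by frame reproduces the P_t to P_(t+1) update loop described above.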
  • while the target object is being tracked, the area where it is located is obtained in real time through the tracking frame, and the target is then extracted.
  • the target area is filtered by the fast least-squares method with adaptive boundary limitation to smooth the input target area. Since the end of the print head has relatively high contrast within the whole image, the classic K-means algorithm is used for classification to extract the largest class in the figure. After K-means classification, the end of the nozzle is effectively segmented, as shown in Figure 6.
  • to obtain the three-dimensional coordinate point, the acquired end target is first converted to grayscale; second, the grayscale image of the target object is thresholded and smoothed; then edge detection is performed on the smoothed image; next, Hough line detection is performed to obtain the intersection point; finally, the three-dimensional coordinates of the end of the nozzle are obtained.
  • the printing platform control includes the following parts:
  • the movable printing platform with four-axis high-precision stepping and servo control can be used to compensate the printing position error in real time;
  • Control scheme design: control based on a PLC programmable controller, with a visual error compensation system for real-time error-information compensation control;
  • Intelligent control of the print head: the print volume of the print head is adjusted by analog control of the air pressure.
  • the control process is: after the four-axis linkage platform system is started, press reset; each axis automatically finds its origin and the axes are reset in sequence. After the reset is complete, press start; the system then automatically waits for compensation information from the host computer's visual error compensation system and performs position compensation. After each compensation is completed, new compensation information is obtained and compensation is performed again, running back and forth in this way.
  • the electrical control system adopts PLC as the main controller according to requirements, and the control unit is two servo motors, two stepping motors, and an analog pressure valve.
  • the PLC and the robot adopt TCP communication control.
  • when the robot reaches the designated printing position, the corresponding data is sent to the server through TCP, and the PLC obtains the designated air-pressure information and the start signal to open or close the analog pressure valve.
  • the vision system performs real-time position detection: real-time position-error information is calculated, sorted and analyzed, and then sent to the server; the PLC obtains the data for position compensation, thereby improving printing accuracy.
  • the printing platform adopts a four-axis moving device for automatic reset and position compensation, and can start any axis for position compensation in real time.
  • the robot system can simultaneously access database information through the standard TCP protocol, realizing multi-terminal data-information sharing.
  • There are three subsystems in the current system: the robotic arm system, the vision system, and the motion platform system. A communication system must be built so that the three subsystems can communicate with each other, ensuring the real-time performance and accuracy of the printing process.
  • the C/S architecture is used for communication.
  • the communication system is mainly based on .NET Remoting and TCP/IP protocols; a server-side program and multiple client-side programs have been written.
  • the client can send messages to the server program, and the server program can broadcast information to the client programs; multiple clients can be open and communicating at the same time.
  • the vision subsystem and the robotic arm subsystem respectively send the detected position coordinates and the ideal position coordinates to the server; the server generates the coordinate difference after calculation and broadcasts it to each subsystem. Among them, the vision subsystem and the robotic arm subsystem ignore this message, and the motion platform system receives this message and adjusts the position.
  • the four cameras collect pictures through the capture card and then call the needle-point detection algorithm to extract the needle-point coordinate position (including time information) from each picture. These needle-tip coordinates are then stored in the "collection point database".
  • the robot system transmits the motion coordinates of the robot arm to the vision system through the Remoting communication protocol, and stores it in the motion trajectory data table.
  • the needle tip detection algorithm can also use the motion trajectory data table information.
  • the printing actuator is a UR3 robotic arm with a needle tube at the end; inkjet printing is achieved by pressurizing and extruding the material during printing.
  • the printed material is located on the four-axis linkage platform, and the position identification of the printing end point is obtained by multi-eye stereo vision measurement.
  • the preprocessing of the print model is divided into two different processes.
  • one processing flow of the model data is to plan a single-slice path after slicing, finally forming a complete printing path;
  • the other processing flow first finds the path points on the surface, then triangulates them to obtain the normal vector of each path point, then calculates the manipulator attitude parameters from the normal vectors, and finally combines these with the path-point positions to form the control file for the free-form-surface spraying path.
  • this embodiment uses binocular vision to detect the end position of the nozzle and calculate the position error of the path point, and then compensates the position error by moving the platform, reducing the error and overcoming the above problem.
  • the compensation process is: spraying path points are set at intervals of about 1 mm in the XY plane; when the end of the robotic arm reaches a path point, a signal is sent to the vision system, which measures the end position and compares it with the preset position. The compensation value is fed back to the mobile platform, which moves according to the compensation value (the moving distance is half the deviation value) to compensate for the deviation of the end of the manipulator, achieving error compensation.
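The threshold-then-compensate logic can be sketched as below. The numeric threshold is illustrative (the text does not fix one at this point), while the half-deviation move follows the description above:

```python
ERROR_THRESHOLD_MM = 0.05   # illustrative value, not specified by the text

def platform_compensation(ideal, measured, threshold=ERROR_THRESHOLD_MM):
    """Per-axis compensation: if every axis error is within the threshold,
    the four-axis platform is not started; otherwise the platform moves
    opposite to HALF the deviation, as the compensation process specifies."""
    error = [m - i for i, m in zip(ideal, measured)]
    if max(abs(e) for e in error) <= threshold:
        return [0.0, 0.0, 0.0]          # within tolerance: no compensation
    return [-e / 2.0 for e in error]    # half-deviation corrective move
```

For a measured end point 0.2 mm off along X, the platform move would be -0.1 mm along X; an error within the tolerance leaves the platform idle.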
  • the error compensation effect is shown in Figure 8.
  • the samples printed directly through the zigzag pattern have poor plane uniformity, especially the convex hull phenomenon at the corners.
  • the robot arm is controlled to move in a hybrid path, and the movement trajectory is shown in Figure 9.
  • the hybrid path (Figure 9a) improves edge accuracy compared with the zigzag path, but printing material accumulates at the edge corners; therefore control points are added to the edge path (Figure 9b). Increasing the running speed of the turning path after the control points and reducing the material extrusion pressure can effectively reduce material accumulation at the inflection points.
  • the control of the end position and posture of the robotic arm is the core key technology of 3D printing.
  • the surface model of Figure 10a is used to illustrate the data processing process of free-form surface spraying.
  • the point cloud scanned by the line laser sensor is shown in Figure 10b. The scanned point cloud has steps in the height direction of the surface, which greatly affects the accuracy of surface spraying; therefore the surface point cloud must be fitted according to the error situation.
  • the X-axis point-cloud scanning interval of the sensor is set to 0.3 mm, and the Y-axis scanning interval to 1 mm.
  • two-dimensional point fitting is used instead of three-dimensional surface reconstruction: the X-axis cross-sections of the point cloud are fitted first, and then the Y-axis cross-sections.
  • the fitting method is least squares. After each group of points is fitted, the groups are merged into a new three-dimensional model. When the model surface is more complicated, the fitting function of each point set may differ, ensuring that the final surface model has higher accuracy. One set of points is taken as an example for graphical display.
  • the result of the second-order function fitting is the closest to the original model, as shown in Figure 11a.
  • in the figure, the indigo * marks represent the original points, the blue points represent the second-order fitting result, and the red o marks represent the fourth-order fitting result.
  • the original point cloud model and the fitted point cloud model are shown in Figure 11b, and the step effect error is eliminated after fitting.
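The per-cross-section least-squares fit can be sketched without external libraries by solving the normal equations directly. This is a generic polynomial fitter, not the patent's implementation:

```python
def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    (V^T V) c = V^T y for the Vandermonde matrix V, solved with
    Gaussian elimination with partial pivoting."""
    n = degree + 1
    # Normal-equation matrix and right-hand side.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, n))) / A[r][r]
    return coeffs  # coeffs[i] multiplies x**i
```

Fitting each cross-section with `degree=2` and, where the surface is more complicated, a different degree per point set mirrors the procedure described above.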
  • the normal vector of each point in the point cloud model needs to be calculated as the attitude control parameter when the robot arm sprays to that point.
  • the triangulation method is used to reconstruct the surface; edge vectors are computed from adjacent points and cross-multiplied to obtain the normal vector of each point. The calculation result is shown in Figure 12.
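The cross-product step for a single mesh triangle can be sketched as follows (function and argument names are mine):

```python
import math

def triangle_normal(p0, p1, p2):
    """Unit normal of a mesh triangle: cross product of two edge
    vectors formed from adjacent points, then normalized."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],     # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```

A point's normal in the triangulated model would then typically be an average of the normals of its incident triangles.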
  • the attitude control parameters of the UR3 manipulator need to be calculated according to the normal vector.
  • the calculation process is to first calculate the corresponding Euler angles (roll, pitch, yaw) from the space normal vector, and then convert them into the rotation vector (Rx, Ry, Rz) that controls the posture of the robotic arm. Combined with the control position (x, y, z) of the path point, the 6-dimensional control vector (x, y, z, Rx, Ry, Rz) of the robotic arm is obtained.
  • When the posture parameter is [0 0 0], the end of the UR3 points along the vector [0 0 1].
  • The roll angle of the vector [1 2 3] is the angle of 0.588 (in radians) between the vector [0 2 3] and the vector [0 0 1], with negative direction;
  • the pitch angle is 0.322, the angle between the vector [1 0 3] and the vector [0 0 1], with positive direction; and the yaw angle is 0.
  • the 6-dimensional control vector of all path points of the free-form surface can be obtained, thereby realizing printing control.
  • The UR3 supports offline remote programming control through its own API library files.
  • Supported control platforms include C#, Python, etc.
  • The UR3 control software was developed in C# on the VS platform.
  • The main functions of the software include XYZ translation control of the UR3, single-axis rotation control, current-position display, Z-shaped, spiral and ring-shaped path printing control, reading path files for printing, etc.
  • the free-form surface coating is carried out with the model shown in Figure 10, and the Z-shaped path is adopted.
  • The coating effect is shown in Figure 15. It can be seen that the robotic arm achieves the expected printing effect, but the printing accuracy needs improvement.
  • When the UR robotic arm performs curved-surface coating printing, slight jitter still occurs at a movement speed of 1 mm/s, whereas flat printing shows none.
  • The greater the deflection angle of the end of the robotic arm, the greater the jitter, which is a major factor affecting printing accuracy.
  • Another important factor that affects the printing accuracy is the extrusion speed of the material.
  • Although the external environment interferes with the material at the end of the print head, the position of the print-head end can be accurately and effectively detected in real time, providing reliable information for feedback correction; the 3D printing trajectory is corrected in real time and printing accuracy is significantly improved.
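The Euler-angle calculation described in the bullets above can be sketched in a few lines. This is an illustrative reconstruction (the function name is ours, and the roll/pitch sign conventions are the author's), reproducing the magnitudes 0.588 and 0.322 quoted for the normal vector [1 2 3]:

```python
import math

def euler_from_normal(n):
    """Roll/pitch/yaw (radians) for a surface normal, following the
    convention in the text: roll is the angle between the YZ-plane
    projection [0, y, z] and [0, 0, 1], pitch the angle between the
    XZ-plane projection [x, 0, z] and [0, 0, 1], and yaw is fixed at 0."""
    x, y, z = n
    roll = math.atan2(y, z)    # angle between [0, y, z] and [0, 0, 1]
    pitch = math.atan2(x, z)   # angle between [x, 0, z] and [0, 0, 1]
    yaw = 0.0
    return roll, pitch, yaw

roll, pitch, yaw = euler_from_normal([1, 2, 3])
print(round(roll, 3), round(pitch, 3), yaw)   # 0.588 0.322 0.0
```

The rotation vector (Rx, Ry, Rz) sent to the UR3 would then be obtained by converting these Euler angles to axis-angle form, e.g. with SciPy's `Rotation` class; the signs returned here are magnitudes only, and the negative roll direction mentioned in the text is applied by the author's own convention.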

Landscapes

  • Chemical & Material Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Materials Engineering (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A 3D printing device based on multi-axis linkage control and machine visual feedback, comprising a robotic arm, a spray head, cameras, a printing platform, and a drive device and/or a transmission device, wherein the robotic arm is a multi-axis robotic arm, preferably a six-axis robotic arm; and preferably, at least four cameras are provided and are arranged around the robotic arm. Further provided is a pre-optimization-based method for the real-time tracking and positioning of the tail end of a 3D printing spray head, by means of which real-time positioning and correction for high-precision intelligent 3D printing of artificial bones is achieved. The method comprises: firstly, acquiring an initial target recognition box by means of a CNN method; secondly, using a new fast multi-exposure fusion method to pre-optimize the input image so as to improve image quality and recognition precision; then, tracking the target using a correlation filtering algorithm; and finally, using an adaptive boundary-constrained fast least-squares filtering method, the K-means algorithm, dilation and erosion operations, the Canny algorithm and the Hough line detection algorithm to realize three-dimensional positioning of the spray-head tail end point. An artificial bone scaffold material formed by means of the device and method is expected to replace conventional methods, can spare patients secondary trauma, enables individualized customization of scaffolds, and will greatly help people with bone defects.

Description

3D Printing Device and Method Based on Multi-Axis Linkage Control and Machine Vision Feedback Measurement

Technical Field

The invention relates to a 3D printing device and method based on multi-axis linkage control and machine vision feedback measurement, and belongs to the field of automation technology.

Background Art

With rising living standards, people pay increasing attention to medical care. However, diseases, traffic accidents and other causes can severely damage human bones (so-called "bone defects"), leaving many patients unable to care for themselves and placing a heavy burden on them and their families. At present, the main clinical solutions to bone injury, especially the difficult orthopedic problem of repairing large bone defects, rely on autologous tissue transplantation, allogeneic tissue transplantation or repair with substitute materials. However, these methods all have major drawbacks, such as requiring two operations, limits on the amount of bone that can be harvested, the risk of disease transmission, and low osteogenic activity. For example, autologous bone is harvested from other parts of the patient's body and grafted where needed, but the amount of bone available is limited and the required shape often cannot be obtained.

The development of tissue engineering provides new ideas for bone defect repair: artificial bone is expected to replace traditional autologous or allogeneic bone and spare patients secondary trauma, so artificial bone scaffold materials and their preparation have become a research hotspot. 3D printing technology can control the pore size, porosity, connectivity and specific surface area of the scaffold, and also allows individual customization. However, the accuracy of existing 3D printing equipment used to prepare artificial bone scaffold materials is not high, and further improvement is urgently needed to extend the application range, stability and safety of these materials.
Summary of the Invention

To address the above technical problems, the present invention provides a 3D printing device based on multi-axis linkage control and machine vision feedback, comprising a robotic arm, a nozzle, cameras, a printing table, and a driving device and/or a transmission device, wherein the robotic arm is a multi-axis robotic arm, preferably a six-axis robotic arm.

According to an embodiment of the present invention, preferably four or more cameras are provided, for example a four-camera setup. More preferably, the cameras are arranged around the robotic arm.
The present invention also provides a pre-optimization-based method for real-time tracking and positioning of the end of a 3D printing nozzle, which includes the following steps:

1) adjusting and optimizing the printing environment;

2) determining the primary ROI (Region of Interest) in which the target object is located;

3) optimizing the primary ROI.
According to an embodiment of the present invention, in step 1), the printing environment is adjusted and optimized by a color-segmentation pre-optimization method and/or a fast multi-exposure fusion method.

According to an embodiment of the present invention, in step 2), samples from the four cameras are trained with a CNN (Convolutional Neural Network) model to detect and recognize the target object and mark the primary ROI in which it is located.

According to an embodiment of the present invention, in step 3), a fast least-squares filtering method with adaptive boundary constraints is used to perform edge-preserving smoothing of the image, thereby optimizing the primary ROI.

According to an embodiment of the present invention, the tracking and positioning of the end of the 3D printing nozzle is monitored in real time by a vision-based method, and the printing algorithm is corrected in real time according to the positioning feedback.

According to an embodiment of the present invention, the method is implemented using the above 3D printing device based on multi-axis linkage control and machine vision feedback.
According to an embodiment of the present invention, the pre-optimization-based method for real-time tracking and positioning of the nozzle end may further include the following steps:

i) inputting the model to be printed into the 3D printer;

ii) starting the robotic-arm detection algorithm, detecting the tilt direction of the robotic arm, and activating the two cameras suspended around the arm that face the tilt direction, forming a binocular vision system;

iii) adjusting image quality through this binocular vision system using the color-segmentation pre-optimization method and the fast multi-exposure fusion method;

iv) training samples from the four cameras with the CNN model, detecting and recognizing the target object, and marking the primary ROI in which it is located;

v) with the primary ROI set, tracking the print nozzle with a correlation-filter target-tracking algorithm (i.e., the tracking algorithm uses the correlation-filter method);

vi) extracting the target in real time through the tracking box and applying fast least-squares filtering with adaptive boundary constraints to the image of that region, smoothing it while preserving edges;

vii) classifying the processed features with the K-means algorithm, obtaining the image of the class containing the nozzle, and segmenting out the target;

viii) obtaining the outer contour of the printing end with the Canny algorithm, then applying Hough line detection to the edge image and computing the midpoint of the intersection.
According to an embodiment of the present invention, which cameras face the tilt direction of the robotic arm is determined from the posture information fed back by the end of the arm.

According to an exemplary embodiment of the present invention, the pre-optimization-based method for real-time tracking and positioning of the nozzle end follows substantially the flow shown in Figure 4.
According to an embodiment of the present invention, a pre-optimization method for color segmentation is also provided; an exemplary flow is shown in Figure 3. The method includes the following steps:

a) converting the input color image to the HSV color space with a color-conversion function;

b) comparing color thresholds in the H, S and V channels against predetermined threshold ranges; the print nozzle used in the experiments may have any of several colors, for example five colors selected from red, purple, green, cyan and blue; when a color lies within the predetermined threshold range it is kept as a valid value, otherwise it is discarded;

c) smoothing the resulting image; here median filtering is chosen to remove single-point noise;

d) extracting contours from the smoothed image, i.e., drawing the bounding rectangle of each independent object and removing targets in irrelevant rectangles based on aspect ratio and area;

e) drawing the selected image according to the optimization result, so that only the print-nozzle image is retained.

Preferably, if the image of the print nozzle is dim, another optimization method is started.
According to an embodiment of the present invention, the predetermined threshold ranges are as follows:

[Table of HSV threshold ranges — rendered as an image in the original (PCTCN2020090093-appb-000001); numerical values not recoverable.]
According to an embodiment of the present invention, a fast multi-exposure fusion method is also provided; an exemplary flow is shown in Figure 5. Preferably, the method includes continuously acquiring images while the camera runs and continuously computing their average brightness; if it falls below a set brightness value, the fast multi-exposure fusion method is started to optimize the image.

According to an embodiment of the present invention, the fast multi-exposure fusion method includes the following steps:

a) converting the input image to grayscale, applying gamma corrections of different strengths to adjacent frames as initial corrections, and applying high- and low-pass filtering to them;

b) taking the maximum brightness value of each pixel across these images as the local contrast weight;

c) judging the brightness of these grayscale images with a discrimination method: with a brightness threshold of 30, values in [30, 255-30] are considered a reasonable brightness interval and set to 1, all others to 0, yielding the exposure weight map;

d) applying histogram equalization to the input image, then median filtering to obtain an initial color weight map, followed by dilation and erosion operations to obtain the final color weight map;

e) multiplying the exposure weight map by the color weight map, normalizing the result, multiplying it by the local contrast weight to obtain the initial fusion weight, and then filtering the initial fusion weight with a recursive filtering method to obtain the final fusion weight;

f) fusing the input images according to this fusion weight, thereby optimizing the image.
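A heavily simplified sketch of the fusion idea (covering only steps a, c, e and f): gamma-corrected variants are blended with binary well-exposedness weights using the threshold of 30 from step c). The contrast and colour weights and the recursive filtering of the full method are omitted, so this illustrates the weighting scheme rather than the patented algorithm.

```python
import numpy as np

def fuse_exposures(gray, gammas=(0.5, 1.0, 2.0), t=30):
    """Build gamma-corrected variants of a grayscale image (step a),
    give each pixel weight 1 where its value lies in the 'reasonable'
    interval [t, 255 - t] and 0 elsewhere (step c), normalise the
    weights (step e) and blend the variants (step f)."""
    g = gray.astype(np.float64) / 255.0
    variants = [np.power(g, gm) * 255.0 for gm in gammas]   # a)
    weights = [((v >= t) & (v <= 255 - t)).astype(np.float64)
               for v in variants]                           # c)
    wsum = np.sum(weights, axis=0)
    wsum[wsum == 0] = 1.0            # avoid division by zero
    weights = [w / wsum for w in weights]                   # e)
    fused = np.sum([w * v for w, v in zip(weights, variants)], axis=0)
    return fused.astype(np.uint8)                           # f)

dark = np.full((4, 4), 20, np.uint8)   # under-exposed patch
out = fuse_exposures(dark)
print(out[0, 0])   # 71: only the gamma-0.5 variant is well exposed
```

Here the under-exposed value 20 falls outside [30, 225] in the original and gamma-2.0 variants, so only the brightened gamma-0.5 variant (value ≈ 71) receives weight.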
According to an embodiment of the present invention, a method is also provided for target tracking of the print nozzle, in particular of a moving print nozzle, which includes tracking with a correlation-filter target-tracking algorithm (KCF).

According to an embodiment of the present invention, in the correlation-filter target-tracking algorithm, HOG features are first extracted from multiple regions around the selected ROI, and a circulant matrix is then used to solve for the ROI selected in the next frame.
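The principle of correlation-filter tracking can be illustrated with a minimal single-channel linear filter in the frequency domain (MOSSE-style). KCF as described here additionally uses HOG features and kernelization via circulant matrices; both are omitted in this sketch, which only shows how a filter trained on one patch relocates a shifted target at the next frame.

```python
import numpy as np

def gaussian_peak(h, w, cy, cx, sigma=2.0):
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

def train_filter(patch, sigma=2.0, lam=1e-4):
    """Desired response is a Gaussian centred on the target; the filter
    is the regularised frequency-domain ratio G * conj(F) / (|F|^2 + lam)."""
    h, w = patch.shape
    G = np.fft.fft2(gaussian_peak(h, w, h // 2, w // 2, sigma))
    F = np.fft.fft2(patch)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def locate(H, patch):
    """Correlate and return the integer peak position of the response."""
    R = np.real(np.fft.ifft2(np.fft.fft2(patch) * H))
    return tuple(int(i) for i in np.unravel_index(np.argmax(R), R.shape))

base = gaussian_peak(64, 64, 32, 32, sigma=3.0)   # "nozzle" template
H = train_filter(base)
moved = np.roll(base, (5, 3), axis=(0, 1))        # target moved 5 down, 3 right
print(locate(H, moved))   # (37, 35)
```

The response peak shifts by exactly the target's displacement, which is the property correlation-filter trackers exploit frame to frame.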
According to an embodiment of the present invention, when a new ROI is obtained, the image of that region is first processed with fast least-squares filtering under adaptive boundary constraints, which effectively keeps object edges intact while smoothing the remaining non-edge regions.

According to an embodiment of the present invention, the boundary constraint adaptively adjusts the image-boundary region through a tolerance mechanism to further regularize the image.

According to an embodiment of the present invention, the correlation-filter tracking method first proposes an efficient alternative for finding the solution of an objective function defined on a weighted L2 norm: the objective is decomposed along each spatial dimension and the matrix is solved with a fast one-dimensional solver; the method is then extended to the more general case by solving an objective function defined on a weighted norm L_r (0 &lt; r &lt; 2), or by using aggregated data terms that cannot be realized in existing edge-preserving (EP) filters.
According to an embodiment of the present invention, K-means is used for classification. Preferably, the selected ROI can be divided into 3 classes to distinguish the nozzle end, the printing plate surface and the printed material. More preferably, the print nozzle falls into the second class of the K-means classification, so only the second class is extracted and the remaining classes are set to white.

According to an embodiment of the present invention, obtaining the second-class image helps mask out noise interference, and Canny detection can then obtain the edge of the print-nozzle end fairly effectively.

According to an embodiment of the present invention, the extracted, relatively complete edge points are used as the data points for Hough line detection.

According to an embodiment of the present invention, preferably, in line detection with the HoughLinesP function, the threshold between two lines is set to 10.

According to an embodiment of the present invention, preferably, the number of lines fitted during Hough line detection is limited to 3 or fewer.
According to an embodiment of the present invention, while solving for the three-dimensional coordinates, the position of the coordinate point and the real-time running state of the printer are judged in real time. When the end point of the print nozzle is detected below the fitted line and the printer is confirmed to be moving, printing is still in progress and this position is taken as the actual measured position. When the end point is detected above or outside the fitted line, regardless of whether the printer is working, the printing process is approaching a stop; detection is stopped immediately and the position is discarded. According to an embodiment of the present invention, when switching cameras leaves the opposite camera unable to form a binocular pair to detect the end position while the nozzle is moving, the direction and position of the print nozzle are reacquired, two cameras facing the nozzle direction are reselected, the CNN-trained model is used again to obtain the initial ROI, and the tracking and detection algorithms then obtain the three-dimensional position of the nozzle in real time to feed back and adjust the printing process until it ends.
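The validity check on the detected end point can be written as a small predicate. The image-coordinate convention (y growing downward, so "below the fitted line" means a larger y) and the linear form of the fitted contour are our assumptions for this sketch.

```python
def end_point_valid(px, py, line_a, line_b, printer_moving):
    """Sketch of the real-time validity check: the detected tip (px, py)
    counts as a real printing position only while it lies below the
    fitted contour line y = a*x + b (image y grows downward) and the
    printer is still moving; otherwise the sample is discarded."""
    y_line = line_a * px + line_b
    return printer_moving and py >= y_line

assert end_point_valid(10, 55, 0.5, 40, True)       # below the line, moving: keep
assert not end_point_valid(10, 30, 0.5, 40, True)   # above the line: discard
assert not end_point_valid(10, 55, 0.5, 40, False)  # printer stopped: discard
```

In the full system this predicate would gate whether a triangulated 3D point is fed back into the trajectory correction.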
According to an embodiment of the present invention, before step 1), at least one of the following training steps is preferably performed:

a) inputting three or more complex printing models (for example, models with varied and numerous surface types, preferably covering as many of the surface forms encountered during printing as possible);

b) running the printer without air supply and without printing material;

c) capturing video and/or images of the print nozzle with the cameras until printing ends;

d) marking the nozzle region in the captured video and/or images as training samples;

e) building a CNN training network on the basis of the training samples, training it, and obtaining the training result;

f) determining the primary selection box from the training result.

According to an embodiment of the present invention, in step d), the captured video can be converted into images and the nozzle region in the images marked as training samples.
Beneficial Effects

The inventors found that the accuracy problem of existing 3D printing methods applied to artificial bone scaffold materials stems from the three-coordinate printing method used: the model input at the start of printing may contain deviations, and the lack of an image pre-optimization step and of real-time monitoring and correction during printing greatly reduces equipment accuracy, which cannot meet higher quality requirements. Moreover, since the end of the robotic arm stops and retracts after printing, the end point then lies above the contour curve; the position of the nozzle end must therefore be judged in real time to determine the actual printing point. In addition, after the 3D printer is started, the computer first receives only the ideal position of the end of the robotic arm; because of the size of the print nozzle, the distance between the nozzle and the camera, and the quality of the captured images, the region around that point cannot simply be set as the initial ROI.

Therefore, the present invention uses deep learning to obtain the initial target region. In this process, considering the influence of ambient light on 3D-printing image acquisition, a fast multi-exposure fusion method is proposed; exploiting the advantages of multi-exposure fusion, it effectively suppresses the interference of uneven ambient light on the captured images and considerably improves both training accuracy and the accuracy of the final ROI. Furthermore, the monitoring method of the present invention, based on a pre-optimized feedback loop for 3D printing, effectively improves image quality and the positional accuracy of the nozzle at the end of the robotic arm; it solves the problem that, without a feedback system and image acquisition, the 3D printing process is vulnerable to environmental interference with the material at the nozzle end, so that the position of the nozzle end can be detected accurately and effectively in real time, providing reliable information for feedback correction and correcting the 3D printing trajectory in real time.

In addition, the flexibility and efficiency of the method of the present invention enable significant acceleration of a range of applications that typically require solving large linear systems.
Description of the Drawings

Figure 1 is a schematic diagram of the 3D printing device of the present invention.

Figure 2 is a flowchart of the initial ROI acquisition algorithm.

Figure 3 is a flowchart of the pre-optimization algorithm for color segmentation.

Figure 4 is the overall flowchart of the algorithm.

Figure 5 is a flowchart of the fast multi-exposure fusion method.

Figure 6 shows the detection stages of the tracking sub-process, where a: nozzle tracking position; b: K-means detection; c: edge detection; d: Hough line detection; e: end-point positioning.

Figure 7 is a schematic diagram of finding the intersection of Hough lines.

Figure 8 is a schematic diagram of error compensation.

Figure 9 is a schematic diagram of a hybrid printing path.

Figure 10 is a schematic diagram of a free-form surface model and its point cloud.

Figure 11 is a schematic diagram of cross-section and overall point-cloud fitting.

Figure 12 shows the point-cloud triangulation and normal-vector calculation results.

Figure 13 is a schematic diagram of Euler angle calculation.

Figure 14 is a photo of a flat-printed sample.

Figure 15 is a photo of the curved-surface coating effect.

Figure 16 shows the detection method of Example 2.
Detailed Description

The technical solution of the present invention is described in further detail below with reference to specific embodiments. It should be understood that the following examples merely illustrate and explain the present invention and should not be construed as limiting its scope of protection. All technologies implemented on the basis of the above content of the present invention fall within the intended scope of protection.

Unless otherwise specified, the raw materials and reagents used in the following examples are commercially available products or can be prepared by known methods.

The sources and specifications of the instruments, raw materials and devices are as follows:

Robot: Universal Robots UR3;

Vision system: MindVision industrial camera, AVT GX6600B; lens: BT-F036.
Example 1

1. Preparation of raw materials

a) First weigh 6 g of hydroxyapatite powder on an electronic balance and pour it into a large beaker;

b) Next, measure 28 ml of water with a graduated cylinder and add it; place the beaker in an ultrasonic mixer and mix ultrasonically. When the mixture has become a slurry, stop the mixer;

c) Remove the beaker containing the mixture, weigh 4 g of sodium alginate on the electronic balance, pour it into the beaker and mix again;

d) Pour the mixed slurry into the print nozzle through a funnel for later use.
2. Installation of the hardware system

As shown in Figure 1, the 3D printing system mainly comprises a six-axis robotic arm, a four-axis linkage printing platform, a print nozzle and a visual tracking and positioning module. The motion of the six-axis robotic arm is used to spray-print the surfaces of complex, fine objects; the end of the print nozzle is tracked and positioned by a multi-camera system, and the four-axis linkage printing platform performs compensating motion, achieving high-precision 3D printing on the surface of a bioprosthesis. The exterior of the 3D printing system uses an aluminum-alloy frame, and the walls use relatively lightweight PC compression panels.

The six-axis robotic arm used in the system has six spatial degrees of freedom and high motion flexibility, and can achieve precise positioning in complex curved spaces. The print nozzle is mounted at the end of the arm, which controls three-dimensional patterned printing on the bioprosthesis surface. The discharge of the print nozzle is stabilized by an electronic pressure-regulating valve to ensure uniform extrusion.
2.1. Design of the four-axis linkage platform

The printing platform is a four-axis linkage platform with four degrees of freedom: linear motion in the X, Y and Z directions and rotation about the Z axis, all of which can move simultaneously. It consists of three linear modules and a high-precision turntable; through three-dimensional motion in space and rotation about the Z axis, it cooperates with the six-axis robotic arm controlling the nozzle to adjust the printing position and achieve patterned printing on complex curved surfaces.
2.2.多目视觉的硬件设计2.2. Hardware design of multi-eye vision
视觉硬件解决方案主要功能是用于确定针头的三维空间位置,实现打印针头测量、位置自动校正以及针尖与针痕的中心测量。根据***功能需求以及精度要求,设计出了 视觉测量***的检测方案。本项目将采用的多视角相机***,光源采用对边,双排LED设计,该灯可以根据需求遥控调节亮度。The main function of the vision hardware solution is to determine the three-dimensional space position of the needle, realize the measurement of the printing needle, the automatic position correction, and the center measurement of the needle tip and the needle mark. According to the functional requirements and accuracy requirements of the system, a detection scheme for the vision measurement system was designed. The multi-view camera system to be used in this project, the light source adopts the opposite side, double-row LED design, the light can be remotely adjusted according to needs.
The vision system consists of two binocular subsystems arranged around the workspace; additional binocular subsystems can be added dynamically as the situation requires. The parallel binocular measurement method offers high efficiency, suitable accuracy, a simple system structure, and low cost, making it well suited for online, non-contact product inspection and quality control on the manufacturing floor. For measuring moving objects (including animals and human bodies), image acquisition is completed in an instant, so a parallel binocular system is a particularly effective measurement method. Because the robotic arm can be occluded during motion, two vision subsystems are required to guarantee that the probe remains detectable. Multiple camera systems also provide more data, allowing the probe position to be determined more accurately.
When the robotic arm detection algorithm is started, it detects the tilt direction of the arm and activates the two cameras suspended around the arm that face that direction, forming a binocular vision system. The tilt direction a camera faces is determined from the pose information fed back by the end of the robotic arm.
Through this binocular vision system, a color-segmentation pre-optimization method and a fast multi-exposure fusion method are applied to adjust image quality. The color-segmentation pre-optimization method comprises the following steps:
f) Convert the input color image to the HSV color space via a color conversion function;
g) Compare color thresholds in the H, S, and V channels against predetermined threshold ranges; the nozzle colors used in the experiments are five colors selected from red, purple, green, cyan, and blue;
when a color falls within the predetermined threshold range it is kept as a valid value; otherwise it is discarded;
h) Apply smoothing to the resulting image; median filtering is chosen here to remove single-point noise;
i) Extract contours from the smoothed image, i.e., draw the bounding rectangle of each independent object, and use aspect ratio and area to discard targets in irrelevant rectangles;
j) Draw the selected image according to the optimization result, so that only the printed spray image remains.
If the image of the print nozzle is dim, another optimization method is started.
The predetermined threshold ranges are as follows:
[Threshold-range table: Figure PCTCN2020090093-appb-000002]
3. Printer control and adjustment
The visual tracking algorithm for the 3D printing nozzle end in this project comprises a tracking-and-localization module, a nozzle extraction module, and an end-point 3D detection module.
High-precision visual measurement and tracking detects, recognizes, and tracks a specific moving target by combining machine vision with automation technology, and measures its three-dimensional coordinates. First, based on the position information of the robotic arm end, the two optimal cameras are selected autonomously from the high-precision cameras mounted around the workspace to form binocular stereo vision. Second, a target tracking model combining a preset method with correlation filtering identifies and tracks the print nozzle end. Finally, a Hough line detection method extracts the end point of the print nozzle, and its position is computed according to the parallax principle.
After the vision equipment acquires the actual printed end point, it is compared with the corresponding three-dimensional point of the input computer model. If the error at this point is within a certain threshold, the four-axis linkage platform is not activated; if it exceeds the error threshold, the corresponding compensation is initiated according to the error level.
3.1 The process of obtaining end points
Acquisition of the 3D printing nozzle end point divides into a method for locating the target's initial position, a target tracking method, a target extraction algorithm, and an end-point extraction method. First, the light source, computer, and vision equipment are turned on. Next, the ambient brightness is judged from the images currently acquired by the vision equipment: if the brightness is within the preset threshold range, the fast multi-exposure fusion method is not started; if it exceeds the threshold, the input images are optimized by the fast multi-exposure fusion method to adjust the brightness.
The flow of the fast multi-exposure fusion method is shown in Figure 5 and comprises the following steps:
a) Convert the input images to grayscale, applying gamma correction of different strengths to adjacent frames as initial correction, followed by high-pass and low-pass filtering;
b) Take the maximum brightness value of each pixel across these images as the local contrast weight;
c) Apply a discrimination method to judge the brightness of these grayscale images, with a brightness threshold of 30: values in [30, 255-30] are considered a reasonable brightness interval and are set to 1, while the rest are set to 0, yielding the exposure weight map;
d) Apply histogram equalization to the input images, then median filtering to obtain an initial color weight map, followed by dilation and erosion operations to obtain the final color weight map;
e) Multiply the exposure weight map by the color weight map and normalize the result; multiply the normalized result by the local contrast weight to obtain the initial fusion weights; then apply recursive filtering to the initial fusion weights to obtain the final fusion weights;
f) Fuse the input images according to these fusion weights, thereby optimizing the image.
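A minimal NumPy sketch of the weight computation in steps b)–e). The gamma-corrected grayscale frames of step a) are assumed as input; the high/low-pass filtering, the median/morphology refinement of step d), and the recursive edge-preserving filter of step e) are omitted for brevity.

```python
import numpy as np

def hist_equalize(gray):
    """Histogram equalization used as the basis of the color weight (step d)."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum() / gray.size
    return cdf[gray.astype(np.uint8)]

def fusion_weights(grays, t=30):
    """Per-frame fusion weights for steps b)-e); `grays` are gamma-corrected
    grayscale frames with values in [0, 255]."""
    stack = np.stack([g.astype(float) for g in grays])
    contrast_w = stack.max(axis=0)                                  # step b)
    exposure_w = ((stack >= t) & (stack <= 255 - t)).astype(float)  # step c)
    color_w = np.stack([hist_equalize(g) for g in grays])           # step d)
    w = exposure_w * color_w                                        # step e): combine,
    w = w / (w.sum(axis=0, keepdims=True) + 1e-8)                   # normalize over frames,
    w = w * contrast_w                                              # scale by local contrast
    return w / (w.sum(axis=0, keepdims=True) + 1e-8)                # final fusion weights

def fuse(frames, weights):
    """Step f): weighted fusion of the input frames."""
    return (np.stack([f.astype(float) for f in frames]) * weights).sum(axis=0)
```

With an under-exposed and a well-exposed frame, the weights suppress the frame whose pixels fall outside the [30, 225] interval, so the fused image follows the well-exposed one.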
3.1.1 Target initial position localization
Tracking and localizing the end of the 3D printing nozzle is the most important step of the entire visual inspection and the prerequisite for the other visual detections. Its main function is to separate the print nozzle from the whole field of view into an independent processing unit for the subsequent detection modules. In this step, samples from the four cameras are first used to train a CNN (Convolutional Neural Network) model that judges and recognizes the target object and marks the preliminary ROI region in which it lies.
3.1.2 Target tracking and localization method
The main purpose of tracking the print nozzle end is to extract the three-dimensional information of the end point effectively. Because the nozzle colors are varied and change considerably, traditional tracking algorithms struggle to meet the project's needs, so a discriminative model with a classic correlation-filter tracking algorithm is adopted. In this project, correlation-filter tracking proceeds in three steps. First, in frame I_t, samples are drawn near the current position P_t to train a regressor that can compute the response of a small sampled window. Second, in frame I_{t+1}, samples are drawn near the previous position P_t and the trained regressor evaluates the response of each sample. Finally, the sample with the strongest response is taken as the position P_{t+1} in the current frame.
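The train/respond loop can be illustrated with a minimal MOSSE-style correlation filter in the frequency domain; this is a generic stand-in for the classic correlation-filter tracker described above, not necessarily the exact variant used.

```python
import numpy as np

class CorrelationFilterTracker:
    """Minimal MOSSE-style sketch: train a frequency-domain regressor on a
    window around P_t, then find the response peak in the next frame's window."""

    def __init__(self, sigma=2.0, lam=1e-3):
        self.sigma, self.lam = sigma, lam
        self.H = None  # the learned filter

    def _gaussian_target(self, shape):
        # Desired response: a Gaussian peak at the window center.
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        return np.exp(-(((ys - h // 2) ** 2 + (xs - w // 2) ** 2)
                        / (2 * self.sigma ** 2)))

    def train(self, patch):
        # Ridge regression in the Fourier domain (single-sample MOSSE update).
        F = np.fft.fft2(patch)
        G = np.fft.fft2(self._gaussian_target(patch.shape))
        self.H = (G * np.conj(F)) / (F * np.conj(F) + self.lam)

    def respond(self, patch):
        # Correlate and return the peak offset from the window center.
        resp = np.real(np.fft.ifft2(self.H * np.fft.fft2(patch)))
        dy, dx = np.unravel_index(resp.argmax(), resp.shape)
        h, w = patch.shape
        return dy - h // 2, dx - w // 2
```

Sampling a window around P_t in frame I_{t+1} and adding the returned offset yields P_{t+1}.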
3.1.3 Target extraction algorithm for the print nozzle
Once the target object is being tracked, the region containing it can be obtained in real time from the tracking box, and the target can then be extracted. In this project, for tracking the print nozzle end, the position of the tracking box in the binocular views is first extracted as the initial position point; the image inside the tracking box is then preprocessed. The target region is first smoothed with a fast least-squares filter with adaptive boundary limitation. Because the image of the nozzle end has relatively high contrast within the whole image, the classic K-Means (kmeans) algorithm is used for classification, extracting the largest class in the image. After K-Means classification, the nozzle end is effectively segmented, as shown in Figure 6.
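The K-Means step can be sketched on pixel intensities with plain NumPy; this is a generic stand-in for the classic algorithm, assuming the brightest cluster corresponds to the high-contrast nozzle end.

```python
import numpy as np

def kmeans_segment(gray, k=2, iters=20, seed=0):
    """Classic K-Means on gray levels of the tracked ROI; returns the mask of
    the brightest cluster (assumed to be the nozzle end)."""
    rng = np.random.default_rng(seed)
    pix = gray.astype(float).ravel()
    # Initialize centers from distinct gray levels.
    centers = rng.choice(np.unique(pix), size=k, replace=False)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute the centers.
        labels = np.abs(pix[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pix[labels == j].mean()
    target = centers.argmax()  # brightest cluster
    return (labels == target).reshape(gray.shape)
```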
3.1.4 Algorithm for obtaining the 3D end point
To obtain the three-dimensional coordinate point: first, convert the acquired end target to grayscale; second, threshold the grayscale image of the target object and smooth it; then perform edge detection on the smoothed image; next, apply Hough line detection and find the intersection point; finally, obtain the three-dimensional coordinates of the nozzle end.
Because the edges are not very smooth, many lines are detected. However, experiments with HoughLinesP found that with the accumulator threshold set to 30 and the gap threshold between two lines set to 10, the straight lines on both sides of the nozzle can be extracted fairly accurately. In the Hough line detection process the maximum number of fitted lines is set to 3, which introduces an inconsistency in the number of fitted lines, mainly in the following cases: all fitted lines on the same side; two on the same side; or one in the vertical direction. When all fitted lines are on the same side, the intersection is taken as the average of the actual printed value and the lowest point of the nozzle's outer contour. When two lines are on the same side, the fitted line closest to the outer contour is intersected with the fitted line on the other side to obtain the intersection point. When one line is vertical, the non-vertical fitted line on the other side is taken, and its intersection with the vertical line is used as the intersection point. These intersection points are the positions to be tracked and located.
As shown in Figure 7, the number of Hough lines at the nozzle end is uncertain. Normally two lines appear (Figure 7a), and their intersection is set as the coordinate point of the nozzle end. In exceptional cases, three or more lines appear (Figure 7b).
In that situation, the position of the lowest point of the nozzle's outer contour is first determined and compared with the direction of the robotic arm end. If the arm end points upward, printing has finished and the point is marked invalid; if the arm end still points downward, the arm is judged to be still working. The judgment method is: take the slopes of the lines, and among lines on the same side take the one with the smaller absolute slope; its intersection is the required point. If all the line slopes tend to infinity (Figure 7c), the intersection point is taken as that of the inner line with the line on the other side.
After obtaining the intersection point, the parallax principle of binocular vision is used to obtain the three-dimensional coordinates of the nozzle end.
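For a rectified parallel binocular rig, the parallax principle reduces to the textbook disparity relations; a sketch follows, where the focal length f (in pixels), baseline B, and principal point (cx, cy) are assumed calibration values.

```python
def triangulate_tip(xl, xr, y, f, baseline, cx, cy):
    """Parallax principle for a rectified parallel rig: disparity d = xl - xr
    gives depth Z = f * B / d, and X, Y follow by similar triangles."""
    d = xl - xr
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    Z = f * baseline / d
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return X, Y, Z
```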
3.2 Robot control
3.2.1 Control part
The printing platform control comprises the following parts:
●Platform design. A movable printing platform with four-axis high-precision stepper and servo control allows real-time compensation of printing position errors;
●Control scheme design. Real-time error compensation control is based on a PLC programmable controller and the visual error compensation system;
●Robot and vision control via PLC over TCP communication: the PLC exchanges data with, and controls, the robot and the vision system;
●Intelligent print nozzle control. The print volume of the nozzle is adjusted by analog control of the air pressure.
The control flow is as follows: after the four-axis linkage platform system is started, press Reset; each axis of the platform automatically finds its origin, and the axes are reset in sequence. After axis reset is complete, press Start; the system automatically waits for compensation information from the host computer's visual error compensation system and performs position compensation. Once compensation is complete, it fetches the compensation information again for the next automatic compensation, and so on in a loop.
The electrical control system uses a PLC as the main controller; the controlled units are two servo motors, two stepper motors, and one analog air-pressure valve. The PLC and the robot communicate over TCP: when the robot reaches a designated printing position it sends the specified data to the server over TCP, and the PLC obtains the specified air-pressure information and the start signal to open or close the analog pressure valve. The vision system performs real-time position detection; it computes real-time position error information, organizes and analyzes the data, and sends it to the server, while the PLC fetches the data for position compensation, thereby improving printing accuracy.
The control system has two main characteristics:
●Multi-axis linkage design. The printing platform uses a four-axis motion device for automatic reset and position compensation, and any axis can be started for position compensation in real time.
●Multi-terminal data interaction. The PLC, the visual error system, and the robot system can access the database information simultaneously through the standard TCP protocol, realizing multi-terminal data sharing.
3.2.2 Communication module design
The current system contains three subsystems: the robotic arm system, the vision system, and the motion platform system. A communication system must be built so that these three subsystems can communicate with each other, ensuring the real-time behavior and accuracy of the printing process.
To make system communication more orderly and controllable, and to make the system more extensible, a C/S (client/server) architecture is adopted. The communication system is mainly based on .NET Remoting and the TCP/IP protocol; a server program and multiple client programs were written. Clients can send messages to the server, and the server can broadcast information to the clients; multiple clients can be open and communicating at the same time. The vision subsystem and the robotic arm subsystem send the detected position coordinates and the ideal position coordinates, respectively, to the server; the server computes the coordinate difference and broadcasts it to every subsystem. The vision and robotic arm subsystems ignore this message, while the motion platform system receives it and adjusts its position.
The overall printing communication flow of the system is as follows:
(1) First, the four cameras acquire images through the capture card, and the needle-tip detection algorithm is called to extract the needle-tip coordinate position (including time information) from each image. These needle-tip coordinates are then stored in the "acquisition point database".
(2) The robot system transmits the motion coordinates of the robotic arm to the vision system through the Remoting communication protocol and stores them in the motion trajectory data table. The needle-tip detection algorithm can also use the information in this table.
(3) A timer is set; at each time interval it extracts the data from the acquisition point database and calls the camera calibration program to compute the coordinate points observed by the cameras. These are then compared with the data in the motion trajectory data table to generate PLC motion control parameters, which are transmitted to the PLC motion system through the Remoting communication protocol.
(4) Steps (1)-(3) are repeated until printing ends.
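The pairing-and-broadcast role of the server can be illustrated with an in-memory stand-in; the real system uses .NET Remoting over TCP, and the message shapes below are assumptions for illustration.

```python
class CompensationServer:
    """In-memory stand-in for the C/S hub: once both the vision report
    (detected coordinates) and the robot report (ideal coordinates) for a
    path point have arrived, the coordinate difference is broadcast to all
    subscribed clients."""

    def __init__(self):
        self.subscribers = []
        self.detected = {}  # vision reports, keyed by path-point id
        self.ideal = {}     # robot reports, keyed by path-point id

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def report(self, source, point_id, xyz):
        store = self.detected if source == "vision" else self.ideal
        store[point_id] = xyz
        if point_id in self.detected and point_id in self.ideal:
            diff = tuple(d - i for d, i in zip(self.detected[point_id],
                                               self.ideal[point_id]))
            for cb in self.subscribers:  # broadcast; the platform client acts on it
                cb(point_id, diff)
```

The motion platform subscribes and converts each broadcast difference into a compensation move, while the other subsystems simply ignore it.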
4. 3D printing experiment
The printing actuator is a UR3 robotic arm with a needle tube mounted at its end; printing is achieved by pressurized extrusion of the material. The printed material lies on the four-axis linkage platform, and the position of the printing end point is identified by multi-view stereo vision measurement.
Depending on the 3D printing requirements, print-model preprocessing follows one of two flows. For deposition (stacking) 3D printing, the model data is sliced and then a per-slice path is planned, finally forming the complete printing path. For free-form surface coating 3D printing, the path points on the surface are first computed; the path points are then triangulated and the normal vector of each path point is determined; the attitude parameters controlling the robotic arm are computed from the normal vectors; and finally, combined with the path-point positions, the control file for the free-form surface spraying path is formed.
4.1 Planar deposition printing
For in-plane deposition printing, the path points planned from the print model are first transformed into the UR3 world coordinate frame; with the nozzle attitude fixed and the speed and acceleration set, the UR3 is driven with linear moves (MoveL) to achieve planar printing.
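A sketch of issuing such a linear move from a remote program: `movel` is standard URScript, while the port, motion gains, and helper names below are illustrative assumptions.

```python
import socket

def movel_script(pose, a=0.1, v=0.05):
    """Format one URScript MoveL command for a pose (x, y, z, Rx, Ry, Rz)
    in meters / radians, with acceleration a and velocity v."""
    return "movel(p[%f,%f,%f,%f,%f,%f], a=%f, v=%f)\n" % (*pose, a, v)

def send_movel(robot_ip, pose, a=0.1, v=0.05, port=30002):
    """Send the command to the UR controller's secondary script interface
    (commonly port 30002); robot_ip is a placeholder for the arm's address."""
    with socket.create_connection((robot_ip, port), timeout=2.0) as s:
        s.sendall(movel_script(pose, a, v).encode("ascii"))
```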
In actual printing tests, to guarantee printing continuity and travel speed, identical points on the same straight line are generally simplified away. However, during printing it was found that the UR3 arm jitters when traveling fast, so the actual path of a nominally straight move is not a straight line but an irregular, roughly sinusoidal curve, giving low printing accuracy. The arm controls its travel path by position and cannot feedback-correct its position before reaching the specified point, so the error is hard to eliminate. If an error is found on arrival, another command must be sent to move the arm back to the originally specified position; such a control flow makes the printing process discontinuous, severely affecting printing speed and the uniformity of the printed surface.
Therefore, under the premise of continuous printing, this embodiment overcomes the problem by detecting the nozzle end position with binocular vision, computing the position error of each path point, and then compensating the error with the moving platform. Specifically, the compensation process is: spray path points are placed about 1 mm apart in the XY plane; when the arm end reaches a path point it signals the vision system, which measures the end position and compares it with the preset position; the compensation value is fed back to the moving platform, which moves back and forth according to it (the moving distance is half the deviation) to compensate the arm-end deviation. The error compensation effect is shown in Figure 8.
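The compensation rule (move half the deviation once the error exceeds a tolerance) can be sketched as follows; the threshold value here is illustrative.

```python
def platform_compensation(preset, measured, threshold=0.05):
    """If the measured end point deviates from the preset path point by more
    than `threshold` (illustrative value), the platform moves by half the
    deviation in the opposite direction; otherwise it stays put."""
    error = [m - p for m, p in zip(measured, preset)]
    if max(abs(e) for e in error) <= threshold:
        return [0.0] * len(error)        # within tolerance: no platform motion
    return [-e / 2.0 for e in error]     # move half the deviation to compensate
```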
Printing experiments showed that lowering the arm's running speed, i.e., the printing speed, also greatly reduces arm jitter and thus improves printing accuracy. Another key factor affecting printing accuracy is the material extrusion speed, which is adjusted by controlling the air pressure and must be coordinated with the printing speed.
Samples printed directly with a zigzag path have poor plane uniformity, with pronounced bulges at the corners. To optimize the plane uniformity, the arm is controlled to move along a hybrid path, whose trajectory is shown in Figure 9. Compared with the zigzag path, the hybrid path (Figure 9a) improves edge accuracy, but the printing material accumulates at the edge corners; control points were therefore added on the edge path (Figure 9b). Raising the running speed on the turning path after the control points and reducing the extrusion air pressure effectively reduce material accumulation at the corners.
4.2 Free-form surface printing
Controlling the position and attitude of the robotic arm end is the core key technology of this 3D printing. For an arbitrary free-form surface, a line laser scanner first acquires the point cloud of the surface, and path points for controlling the arm's printing are generated by fitting according to the required printing-point spacing. The points are then triangulated and the normal vector at each triangle vertex is computed; the arm attitude control parameters are computed from the normal vectors and combined with the position parameters to generate the arm control vectors.
The surface model of Figure 10a illustrates the data processing for free-form surface spraying. The point cloud obtained by the line laser sensor is shown in Figure 10b. The scanned point cloud exhibits steps along the height direction of the surface, which significantly affects spraying accuracy, so the surface point cloud must be fitted according to the error situation.
Considering the printing spacing, accuracy, and other factors, the sensor's point cloud scanning interval is set to 0.3 mm along the X axis and 1 mm along the Y axis. To simplify the reconstruction algorithm, two-dimensional point fitting replaces full three-dimensional surface reconstruction: the X-axis cross-sections of the point cloud model are fitted first, then the Y-axis cross-sections, using the least-squares method. After fitting, each group of points is merged into a new three-dimensional model. When the model surface is complicated, different point sets may use different fitting functions, ensuring that the final surface model is more accurate. Taking one point set as a graphical example, a quadratic fit is closest to the original model, as shown in Figure 11a: the indigo * marks the original points, the blue points the quadratic fit, and the red o points the quartic fit. The original and fitted point cloud models are shown in Figure 11b; fitting eliminates the step-effect error.
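The per-cross-section least-squares fit can be sketched with a polynomial fit; degree 2 was the one the text found closest to the original model.

```python
import numpy as np

def fit_cross_section(x, z, degree=2):
    """Least-squares polynomial fit of one scanned cross-section profile,
    returning the smoothed profile and the fitted coefficients."""
    coeffs = np.polyfit(x, z, degree)
    return np.polyval(coeffs, x), coeffs
```

Applying this along every X-axis cross-section and then every Y-axis cross-section, and merging the results, yields the smoothed point cloud model.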
Next, the normal vector of every point in the point cloud model is computed as the attitude control parameter for the arm when spraying that point. The surface is reconstructed by triangulation, and the normal at a point is obtained by computing vectors to neighboring points and taking their cross product. The result is shown in Figure 12.
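The cross-product normal computation for one triangle of the reconstructed surface can be sketched as:

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Normal of one triangle: cross product of two edge vectors from a
    shared vertex, normalized to unit length."""
    n = np.cross(np.subtract(p1, p0), np.subtract(p2, p0))
    return n / np.linalg.norm(n)
```

A vertex normal would then typically be the (normalized) average of the normals of its adjacent triangles.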
After obtaining each point's normal vector, the attitude control parameters of the UR3 arm are computed from it: first the corresponding Euler angles (roll, pitch, yaw) are computed from the spatial normal vector, then converted into the rotation vector Rx, Ry, Rz that controls the arm's attitude. Combined with the path point's control position x, y, z, the 6-dimensional control vector (x, y, z, Rx, Ry, Rz) of the arm is obtained.
Take the spatial normal vector [1 2 3] as an example of the attitude parameter calculation, as shown in Figure 13. When the attitude parameter is [0 0 0], the UR3 end attitude is the vector [0 0 1]. Describing the Euler angles in the XYZ fixed-angle frame, the roll angle of [1 2 3] is the angle 0.588 (in radians) between [0 2 3] and [0 0 1], with negative direction; its pitch angle is the angle 0.322 between [1 0 3] and [0 0 1], with positive direction (applied after the roll, the effective rotation about Y used in the numerical example below is atan2(1, √13) ≈ 0.2705); and its yaw angle is 0.
Given the Euler angles γ, β, α (roll about X, pitch about Y, yaw about Z, composed as R = R_z(α)·R_y(β)·R_x(γ)), the rotation matrix is:

R = [ cosα·cosβ    cosα·sinβ·sinγ − sinα·cosγ    cosα·sinβ·cosγ + sinα·sinγ
      sinα·cosβ    sinα·sinβ·sinγ + cosα·cosγ    sinα·sinβ·cosγ − cosα·sinγ
      −sinβ        cosβ·sinγ                     cosβ·cosγ ]
The rotation angle θ and axis components k_x, k_y, k_z are computed from the rotation matrix R = (r_ij):

θ = arccos((r_11 + r_22 + r_33 − 1) / 2)

k_x = (r_32 − r_23) / (2·sinθ)

k_y = (r_13 − r_31) / (2·sinθ)

k_z = (r_21 − r_12) / (2·sinθ)
The rotation vector is then:

[Rx Ry Rz]^T = [k_x·θ  k_y·θ  k_z·θ]^T
对于空间法向量[1 2 3],其γ=0.588,β=0.2705,α=0,则:For the space normal vector [1 2 3], γ=0.588, β=0.2705, α=0, then:
[Rx Ry Rz] T=[0.5844 0.2626 -0.0795] T [Rx Ry Rz] T = [0.5844 0.2626 -0.0795] T
根据以上计算过程,即可获得自由曲面所有路径点的6维控制向量,从而实现打印控制。According to the above calculation process, the 6-dimensional control vector of all path points of the free-form surface can be obtained, thereby realizing printing control.
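The computation above can be sketched in Python. This is an illustrative reconstruction: the extraction of roll and pitch from the normal vector is inferred from the numbers of the worked example (γ = atan2(ny, nz), β = atan2(nx, √(ny²+nz²)), yaw = 0), which reproduces the values given in the text.

```python
import math

def normal_to_rotvec(n):
    """Convert a surface normal to a UR-style rotation vector (Rx, Ry, Rz).

    Assumes the XYZ fixed-angle convention of the worked example, with
    roll = atan2(ny, nz), pitch = atan2(nx, sqrt(ny^2 + nz^2)), yaw = 0.
    The degenerate case theta = 0 (normal along +Z) is not handled here.
    """
    nx, ny, nz = n
    gamma = math.atan2(ny, nz)                     # roll
    beta = math.atan2(nx, math.hypot(ny, nz))      # pitch
    alpha = 0.0                                    # yaw
    cg, sg = math.cos(gamma), math.sin(gamma)
    cb, sb = math.cos(beta), math.sin(beta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    # Rotation matrix R = Rz(alpha) @ Ry(beta) @ Rx(gamma)
    r11, r12, r13 = ca * cb, ca * sb * sg - sa * cg, ca * sb * cg + sa * sg
    r21, r22, r23 = sa * cb, sa * sb * sg + ca * cg, sa * sb * cg - ca * sg
    r31, r32, r33 = -sb, cb * sg, cb * cg
    # Axis-angle (rotation vector) from the rotation matrix
    theta = math.acos((r11 + r22 + r33 - 1.0) / 2.0)
    s = 2.0 * math.sin(theta)
    kx, ky, kz = (r32 - r23) / s, (r13 - r31) / s, (r21 - r12) / s
    return kx * theta, ky * theta, kz * theta

Rx, Ry, Rz = normal_to_rotvec((1.0, 2.0, 3.0))   # ≈ (0.5844, 0.2626, -0.0795)
```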
4.3 Print experiment
The UR3 supports offline remote programming through its own API libraries, with control platforms including C# and Python. To facilitate testing of the path-planning algorithm, UR3 control software was developed in C# on the Visual Studio platform. Its main functions include XYZ translation control of the UR3, single-axis rotation control, current-position display, printing control along zigzag, spiral, and ring paths, and reading path files for printing.
With the print path set as a stack of planar squares, a sample printed along a zigzag path is shown in Figure 14. At higher printing speeds the printed path is not strictly straight (Figure 14a); lowering the speed and adding visual feedback control improves the printing accuracy (Figure 14b); a sample printed with the hybrid path and added air-pressure control is shown in Figure 14c.
A free-form surface coating was produced with the model of Figure 10 using a zigzag path; the coating result is shown in Figure 4.13. The result shows that the robotic arm achieves the expected printing effect, but the printing accuracy still needs improvement. When the UR arm prints a coating on a curved surface, slight jitter remains even at a movement speed of 1 mm/s, whereas planar printing shows none. The larger the deflection angle of the arm's end, the stronger the jitter, which is a major factor limiting printing accuracy. Another important factor is the material extrusion speed. When the extrusion speed is low, the straightness of the printed lines is good, but material tends to accumulate at the needle, producing an uneven coating, as in Figure 15a. When the extrusion speed is too high, the printed material appears jagged rather than straight and piles up at the edges, as in Figure 15b. The correspondence between arm movement speed and material extrusion speed must be found experimentally to guarantee printing accuracy; Figure 15c shows a better printing result. In addition, adjusting the fitting parameters to optimize the fitting accuracy, so that the fitted point cloud matches the original model more closely, can further improve accuracy.
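As a sketch of how such remote control can be issued: UR controllers accept URScript commands over a TCP socket (commonly the secondary interface on port 30002). The snippet below only formats a `movel` command from a 6-D control vector; the robot IP address is a placeholder, and the sending step is shown but not executed.

```python
import socket

def movel_command(pose, a=0.1, v=0.01):
    """Format a URScript movel command from a 6-D control vector
    (x, y, z, Rx, Ry, Rz); a and v are acceleration and velocity."""
    x, y, z, rx, ry, rz = pose
    return (f"movel(p[{x:.4f},{y:.4f},{z:.4f},"
            f"{rx:.4f},{ry:.4f},{rz:.4f}], a={a}, v={v})\n")

cmd = movel_command((0.10, 0.20, 0.30, 0.5844, 0.2626, -0.0795))

# Sending to the controller (placeholder IP, not executed here):
# with socket.create_connection(("192.168.1.10", 30002)) as s:
#     s.sendall(cmd.encode("ascii"))
```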
Example 2
Using the device and method of Example 1, the geometric dimensions and form/position tolerances of the mechanical part of sample 1 were measured (Figure 16), with the following results:
[Table of measured results: image PCTCN2020090093-appb-000009]
The results show that the monitoring method of the present invention, based on the feedback process of pre-optimized 3D printing, can effectively improve the quality of the captured images and the positional accuracy of the print nozzle at the end of the robotic arm. It solves the problem that, for lack of a feedback system and image acquisition, the 3D printing process is susceptible to interference from the external environment on the material at the nozzle end. The position of the nozzle end can thus be detected accurately and effectively in real time, providing reliable information for feedback correction; the 3D printing trajectory is corrected in real time, and the printing accuracy is significantly improved.
The exemplary embodiments of the present invention have been described above. However, the protection scope of the present invention is not limited to the above embodiments. Any modification, equivalent replacement, improvement, etc. made by those skilled in the art within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

  1. A 3D printing device based on multi-axis linkage control and machine vision feedback, comprising: a robotic arm, a nozzle, cameras, a printing table, and a driving device and/or a transmission device, wherein the robotic arm is a multi-axis robotic arm, preferably a six-axis robotic arm;
    preferably, four or more cameras are provided; more preferably, the cameras are arranged around the robotic arm, for example on all sides of it.
  2. A pre-optimization-based method for real-time tracking and positioning of the end of a 3D printing nozzle, comprising the following steps:
    1) adjusting and optimizing the printing environment;
    2) determining the primary ROI (Region of Interest) in which the target object is located;
    3) optimizing the primary ROI.
  3. The method of claim 2, wherein:
    in step 1), the printing environment is adjusted and optimized through a color-segmentation pre-optimization method and/or a fast multi-exposure fusion method;
    in step 2), samples from the four cameras are trained with a CNN (Convolutional Neural Network) model to judge and recognize the target object, and the primary ROI in which the target object is located is marked;
    in step 3), a fast least-squares filtering method with adaptive boundary limitation is applied to the image for edge-preserving smoothing, thereby optimizing the primary ROI;
    preferably, before step 1), at least one of the following training steps is performed:
    a) inputting three or more complex printing models;
    b) running the printer without air supply and without adding printing material;
    c) capturing video and/or images of the print nozzle with the cameras until printing ends;
    d) marking the nozzle region in the captured video and/or images as training samples;
    e) constructing a CNN training network on the basis of the training samples, training it, and obtaining the training result;
    f) determining the primary selection box according to the training result.
  4. The method of claim 2 or 3, wherein the tracking and positioning of the end of the 3D printing nozzle are monitored in real time by a vision-based method, and the printing algorithm is corrected in real time according to the positioning feedback.
  5. The method of any one of claims 2-4, wherein the method is implemented using the 3D printing device based on multi-axis linkage control and machine vision feedback.
  6. The method of any one of claims 2-5, wherein the pre-optimization-based method for real-time tracking and positioning of the end of the 3D printing nozzle further comprises the following steps:
    i) inputting the model to be printed into the 3D printer;
    ii) starting the robotic-arm detection algorithm, detecting the tilt direction of the robotic arm, and activating the two cameras suspended around the arm and facing the tilt direction to form a binocular vision system;
    iii) adjusting the image quality through this binocular vision system, using the color-segmentation pre-optimization method and/or the fast multi-exposure fusion method;
    iv) training samples from the four cameras with the CNN model, judging and recognizing the target object, and marking the primary ROI in which the target object is located;
    v) with the primary ROI set, tracking the print nozzle with a correlation-filter target-tracking algorithm;
    vi) extracting the target in real time through the tracking box, applying fast least-squares filtering with adaptive boundary limitation to the image of that region, and smoothing the image while preserving edges;
    vii) classifying the processed features with the K-means algorithm, obtaining the image of the class containing the nozzle, and segmenting out the target;
    viii) obtaining the outer contour of the printing end with the Canny algorithm, then applying Hough line detection to the edge image and computing the midpoint of the intersection;
    preferably, the color-segmentation pre-optimization method comprises the following steps:
    a) converting the input color image to the HSV color space through a color-conversion function;
    b) comparing color thresholds in the H, S, and V spaces according to predetermined threshold ranges; the nozzle colors printed in the experiment may be of several kinds, for example five colors selected from red, purple, green, cyan, and blue;
    when a color is within the predetermined threshold range, it is selected as a valid value; otherwise, the value is discarded;
    c) smoothing and optimizing the acquired image, here with a median filter, to remove single-point noise;
    d) extracting contours from the smoothed image, i.e. drawing the bounding rectangle of each independent object, and removing targets inside superfluous irrelevant rectangles by aspect ratio and area;
    e) drawing the selected image according to the result of the selection and optimization, so that only the image of the print nozzle is retained.
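The thresholding in steps a)-b) can be sketched with numpy as follows; this is illustrative only and not part of the claimed subject matter, and the threshold values shown are hypothetical placeholders, not the claimed ranges.

```python
import numpy as np

def hsv_threshold(hsv, lo, hi):
    """Keep pixels whose (H, S, V) triple lies inside [lo, hi]; zero out the rest."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((hsv >= lo) & (hsv <= hi), axis=-1)   # per-pixel validity
    return hsv * mask[..., None]                        # invalid pixels set to 0

# A 1x2 HSV image: one pixel inside a (hypothetical) red range, one outside.
img = np.array([[[5, 200, 150], [90, 40, 30]]], dtype=np.uint8)
out = hsv_threshold(img, lo=(0, 100, 50), hi=(10, 255, 255))
```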
  7. A fast multi-exposure fusion method, comprising: while the camera is running, continuously acquiring images and computing their average brightness; if it falls below a set brightness value, starting the fast multi-exposure fusion method to optimize the image.
  8. The method of claim 7, wherein the fast multi-exposure fusion method comprises the following steps:
    a) converting the input images into grayscale images, applying gamma correction with different degrees of initial correction to adjacent frames, and applying high-pass and low-pass filtering to these;
    b) taking the maximum brightness value of each pixel across these images as the local contrast weight;
    c) judging the brightness of these grayscale images with a discrimination method, with a brightness threshold of 30; values in [30, 255-30] are considered the reasonable brightness interval, set to 1 within that interval and 0 elsewhere, to obtain the exposure weight map;
    d) performing histogram equalization on the input image, then applying median filtering to obtain the initial color weight map, and then applying dilation and erosion operations to obtain the final color weight map;
    e) multiplying the exposure weight map by the color weight map, normalizing the product, multiplying the normalized result by the local contrast weight to obtain the initial fusion weights, and then filtering the initial fusion weights with a recursive filtering method to obtain the final fusion weights;
    f) fusing the input images according to the fusion weights, thereby optimizing the image.
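The exposure-weight computation of step c) can be sketched as below; this is a numpy illustration of the stated rule only (threshold 30, binary weight in [30, 255-30]), not the full fusion pipeline.

```python
import numpy as np

def exposure_weight(gray, t=30):
    """Binary exposure weight map: 1 inside the reasonable brightness
    interval [t, 255 - t], 0 elsewhere (t = 30 as in the claim)."""
    return ((gray >= t) & (gray <= 255 - t)).astype(np.float64)

gray = np.array([[0, 30, 128, 225, 255]], dtype=np.uint8)
w = exposure_weight(gray)   # under- and over-exposed pixels get weight 0
```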
  9. A method for target tracking of a print nozzle, in particular of a print nozzle in motion, comprising tracking with a correlation-filter target-tracking algorithm (KCF);
    preferably, in the correlation-filter target-tracking algorithm, HOG features are first extracted from multiple regions around the selected ROI, and a circulant matrix is then used to solve for the selected ROI of the next frame;
    preferably, when a new selected ROI is obtained, the image of that region is first processed with fast least-squares filtering with adaptive boundary limitation, to effectively keep object edges intact while smoothing the remaining non-edge regions.
  10. The method of claim 9, wherein the boundary limitation adaptively adjusts the image-boundary region through a tolerance mechanism to further regularize the image;
    preferably, the correlation-filter target-tracking method first proposes an effective alternative for finding the solution of the objective function defined on the weighted L2 norm (1), comprising decomposing the objective function along each spatial dimension and solving the matrix with a 1-dimensional fast solver; the method is then extended to the more general case by solving an objective function defined on a weighted norm L r (0<r<2) or by using aggregated data terms that cannot be realized in existing EP (edge-preserving) filters.
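The circulant-matrix solution referred to in claim 9 rests on the fact that ridge regression over all cyclic shifts of a signal diagonalizes under the DFT. The 1-D sketch below shows that linear-kernel core on a synthetic signal; it is illustrative only (the claimed tracker uses HOG features and 2-D image patches, not handled here).

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    """Ridge-regression correlation filter in the Fourier domain:
    H = conj(X) * Y / (X * conj(X) + lam), the linear-kernel core of KCF."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    return np.conj(X) * Y / (X * np.conj(X) + lam)

def respond(H, z):
    """Correlation response of filter H on a new signal z;
    the peak of the response locates the target's shift."""
    return np.real(np.fft.ifft(H * np.fft.fft(z)))

n = 64
idx = np.arange(n)
x = np.exp(-0.5 * ((idx - 20) / 3.0) ** 2)   # training signal: target at 20
y = np.exp(-0.5 * ((idx - 20) / 2.0) ** 2)   # desired response: peak at 20
H = train_filter(x, y)
z = np.roll(x, 7)                            # new frame: target shifted by 7
peak = int(np.argmax(respond(H, z)))         # expected near 20 + 7 = 27
```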
PCT/CN2020/090093 2020-05-13 2020-05-13 3d printing device and method based on multi-axis linkage control and machine visual feedback measurement WO2021226891A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2020/090093 WO2021226891A1 (en) 2020-05-13 2020-05-13 3d printing device and method based on multi-axis linkage control and machine visual feedback measurement
PCT/CN2021/093520 WO2021228181A1 (en) 2020-05-13 2021-05-13 3d printing method and device
CN202110523612.9A CN113674299A (en) 2020-05-13 2021-05-13 3D printing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/090093 WO2021226891A1 (en) 2020-05-13 2020-05-13 3d printing device and method based on multi-axis linkage control and machine visual feedback measurement

Publications (1)

Publication Number Publication Date
WO2021226891A1 true WO2021226891A1 (en) 2021-11-18

Family

ID=78526164

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090093 WO2021226891A1 (en) 2020-05-13 2020-05-13 3d printing device and method based on multi-axis linkage control and machine visual feedback measurement

Country Status (1)

Country Link
WO (1) WO2021226891A1 (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106264796A (en) * 2016-10-19 2017-01-04 泉州装备制造研究所 A kind of 3D print system based on multi-shaft interlocked control and machine vision metrology
CN206403893U (en) * 2016-10-19 2017-08-15 泉州装备制造研究所 A kind of 3D printing system based on multi-shaft interlocked control and machine vision metrology
CN107718544A (en) * 2017-10-29 2018-02-23 南京中高知识产权股份有限公司 3D printing device and its method of work with visual performance
CN108381916A (en) * 2018-02-06 2018-08-10 西安交通大学 A kind of compound 3D printing system and method for contactless identification defect pattern
CN108638497A (en) * 2018-04-28 2018-10-12 浙江大学 The comprehensive detecting system and method for a kind of 3D printer printer model outer surface
US20180307206A1 (en) * 2017-04-24 2018-10-25 Autodesk, Inc. Closed-loop robotic deposition of material
CN208035371U (en) * 2018-03-15 2018-11-02 杭州德迪智能科技有限公司 A kind of FDM three-dimensional printers with manipulator
CN109080144A (en) * 2018-07-10 2018-12-25 泉州装备制造研究所 3D printing spray head end real-time tracking localization method based on central point judgement
CN109177175A (en) * 2018-07-10 2019-01-11 泉州装备制造研究所 A kind of 3D printing spray head end real-time tracking localization method


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114183183A (en) * 2021-11-22 2022-03-15 中煤科工集团沈阳研究院有限公司 Device and method for constructing underground coal mine sealing wall
CN113996757B (en) * 2021-12-06 2022-12-13 河北工业大学 Real-time sensing and intelligent monitoring system for 3D printing sand mold
CN113996757A (en) * 2021-12-06 2022-02-01 河北工业大学 Real-time sensing and intelligent monitoring system for 3D printing sand mold
CN113876453B (en) * 2021-12-08 2022-02-22 极限人工智能有限公司 Nest preparation method and device based on mechanical arm and surgical robot
CN113876453A (en) * 2021-12-08 2022-01-04 极限人工智能有限公司 Nest preparation method and device based on mechanical arm and surgical robot
CN114674391A (en) * 2022-03-03 2022-06-28 华中科技大学 Method for measuring initial volume of ink in pixel pit by ink-jet printing
CN114606541A (en) * 2022-03-15 2022-06-10 南通大学 Two-dimensional structure micro-nano scale rapid printing system and method based on glass microprobe
CN114606541B (en) * 2022-03-15 2023-03-24 南通大学 Two-dimensional structure micro-nano scale rapid printing system and method based on glass microprobe
CN114603849A (en) * 2022-04-14 2022-06-10 南京铖联激光科技有限公司 Novel scraper device for additive manufacturing and powder laying method
CN114603849B (en) * 2022-04-14 2024-01-26 南京铖联激光科技有限公司 Novel scraper device for additive manufacturing and powder spreading method
CN115107270A (en) * 2022-05-25 2022-09-27 上海理工大学 Colored boundary droplet filling method and device for eliminating color 3D printing step effect
CN115098961B (en) * 2022-06-16 2023-11-07 燕山大学 Degassing U-shaped runner optimization method based on throwing flow principle
CN115098961A (en) * 2022-06-16 2022-09-23 燕山大学 Degassing U-shaped flow channel optimization method based on flow throwing principle
CN115254537A (en) * 2022-08-18 2022-11-01 浙江工业大学 Trajectory correction method of glue spraying robot
CN115254537B (en) * 2022-08-18 2024-03-19 浙江工业大学 Track correction method of glue spraying robot
DE102022004677B3 (en) 2022-12-07 2024-02-01 Telegärtner Karl Gärtner GmbH PCB connector
CN117021574A (en) * 2023-10-08 2023-11-10 哈尔滨理工大学 Magnetic-guided composite material controllable long-arc line path printing system and method
CN117021574B (en) * 2023-10-08 2024-01-09 哈尔滨理工大学 Magnetic-guided composite material controllable long-arc line path printing system and method

Similar Documents

Publication Publication Date Title
WO2021226891A1 (en) 3d printing device and method based on multi-axis linkage control and machine visual feedback measurement
WO2021228181A1 (en) 3d printing method and device
CN111897332B (en) Semantic intelligent substation robot humanoid inspection operation method and system
Wang et al. A CNN-based adaptive surface monitoring system for fused deposition modeling
CN110497187B (en) Sun flower pattern assembly system based on visual guidance
CN107186708B (en) Hand-eye servo robot grabbing system and method based on deep learning image segmentation technology
CN106423656B (en) Automatic spraying system and method based on cloud and images match
JP6426143B2 (en) Controlled autonomous robot system and method for complex surface inspection and processing
CN111679291B (en) Inspection robot target positioning configuration method based on three-dimensional laser radar
CN111421539A (en) Industrial part intelligent identification and sorting system based on computer vision
CN110509300A (en) Stirrup processing feeding control system and control method based on 3D vision guidance
CN113276106B (en) Climbing robot space positioning method and space positioning system
CN101726498B (en) Intelligent detector and method of copper strip surface quality on basis of vision bionics
Kohn et al. Towards a real-time environment reconstruction for VR-based teleoperation through model segmentation
CN116673962B (en) Intelligent mechanical arm grabbing method and system based on Faster R-CNN and GRCNN
CN114037703B (en) Subway valve state detection method based on two-dimensional positioning and three-dimensional attitude calculation
Hsu et al. Development of a faster classification system for metal parts using machine vision under different lighting environments
CN110394422A (en) A kind of sand mold print procedure on-line monitoring device and method
CN108161930A (en) A kind of robot positioning system of view-based access control model and method
CN208092786U (en) A kind of the System of Sorting Components based on convolutional neural networks by depth
CN114879209A (en) System and method for low-cost foreign matter detection and classification of airport runway
CN116578035A (en) Rotor unmanned aerial vehicle autonomous landing control system based on digital twin technology
CN109079777B (en) Manipulator hand-eye coordination operation system
CN112634362B (en) Indoor wall plastering robot vision accurate positioning method based on line laser assistance
CN113885504A (en) Autonomous inspection method and system for train inspection robot and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20935643

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20935643

Country of ref document: EP

Kind code of ref document: A1