WO2021226891A1 - 3D printing device and method based on multi-axis linkage control and machine vision feedback measurement - Google Patents

3D printing device and method based on multi-axis linkage control and machine vision feedback measurement Download PDF

Info

Publication number
WO2021226891A1
Authority
WO
WIPO (PCT)
Prior art keywords
printing
image
algorithm
robotic arm
color
Prior art date
Application number
PCT/CN2020/090093
Other languages
English (en)
French (fr)
Inventor
李俊
高银
谢银辉
唐康来
Original Assignee
中国科学院福建物质结构研究所 (Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院福建物质结构研究所 (Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences)
Priority to PCT/CN2020/090093 priority Critical patent/WO2021226891A1/zh
Priority to PCT/CN2021/093520 priority patent/WO2021228181A1/zh
Priority to CN202110523612.9A priority patent/CN113674299A/zh
Publication of WO2021226891A1 publication Critical patent/WO2021226891A1/zh

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B29WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29CSHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/20Apparatus for additive manufacturing; Details thereof or accessories therefor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B29WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29CSHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/30Auxiliary operations or equipment
    • B29C64/386Data acquisition or data processing for additive manufacturing
    • B29C64/393Data acquisition or data processing for additive manufacturing for controlling or regulating additive manufacturing processes
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B33ADDITIVE MANUFACTURING TECHNOLOGY
    • B33YADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y30/00Apparatus for additive manufacturing; Details thereof or accessories therefor

Definitions

  • the invention relates to a 3D printing device and method based on multi-axis linkage control and machine vision feedback measurement, and belongs to the field of automation technology.
  • the present invention provides a 3D printing device based on multi-axis linkage control and machine vision feedback, including: a robotic arm, a nozzle, cameras, a printing table, and a driving device and/or a transmission device, wherein the robotic arm is a multi-axis robotic arm, preferably a six-axis robotic arm.
  • preferably four or more cameras are provided, for example a four-camera setup may be used. More preferably, the cameras are arranged around the robotic arm, for example on its four sides.
  • the present invention also provides a real-time tracking and positioning method for the end of a 3D printing nozzle based on pre-optimization, which includes the following steps:
  • step 1) the printing environment is adjusted and optimized through the pre-optimization method of color segmentation and/or the rapid multi-exposure fusion method.
  • step 2) samples from four cameras are trained with a CNN (Convolutional Neural Network) model, the target object is judged and recognized, and the initially selected ROI area where the target object is located is marked.
  • step 3) a fast least-squares filtering method with adaptive boundary limitation is used to perform edge-preserving smoothing on the image, thereby optimizing the initially selected ROI region.
  • the tracking and positioning of the end of the 3D printing nozzle is monitored in real time through a visual method, and the printing algorithm is corrected in real time according to the positioning feedback.
  • the method is implemented by using the 3D printing device based on multi-axis linkage control and machine vision feedback.
  • the method for real-time tracking and positioning of the end of the 3D printing nozzle based on pre-optimization may further include the following steps:
  • the algorithm for robotic arm detection is activated to detect the tilt direction of the robotic arm, and activate two cameras hanging around the robotic arm and facing the tilt direction to form a binocular vision system;
  • the pre-optimization method of color segmentation and the fast multi-exposure fusion method are used to adjust the quality of the image;
  • target tracking is performed on the print nozzle with a correlation-filtering target tracking algorithm (that is, the tracking algorithm adopts a correlation-filtering method);
  • the tilt direction of the camera with respect to the robot arm is determined by the posture information fed back from the end of the robot arm.
  • the method for real-time tracking and positioning of the end of a 3D printing nozzle based on pre-optimization has a process basically as shown in FIG. 4.
  • a pre-optimization method for color segmentation is also provided, and an exemplary process thereof is shown in FIG. 3.
  • the pre-optimization method for color segmentation includes the following steps:
  • the color thresholds are compared in the H, S, and V spaces respectively according to the predetermined threshold range.
  • the print head colors in the experiment can be multiple, for example, five selected from red, purple, green, cyan and blue;
  • the predetermined threshold range is as follows:
  • the fast multi-exposure fusion method includes: continuously collecting images while the camera runs and continuously calculating the average image brightness; if it falls below the set brightness value, the fast multi-exposure fusion method is started to optimize the image.
  • the rapid multi-exposure fusion method includes the following steps:
  • the brightness threshold is 30, with [30, 255−30] considered the reasonable brightness interval; pixels within this interval are set to 1 and the rest to 0, thereby obtaining the exposure weight map;
  • the input image is fused to optimize the image.
  • a method for target tracking of print nozzles is provided, which includes tracking using a correlation-filtering target tracking algorithm (KCF, Kernelized Correlation Filters).
  • HOG features are first extracted from multiple regions around the selected ROI area, and a circulant matrix is then used to solve for the ROI area selected in the next frame.
  • the boundary limitation adaptively adjusts the area of the image boundary through a tolerance mechanism to further regularize the image.
  • the correlation-filtering method first proposes an effective alternative for seeking the solution of the objective function defined on a weighted L2 norm, which includes decomposing the objective function into each spatial dimension and using a fast one-dimensional solver to solve the matrix; the method is then extended to the more general case, by solving objective functions defined on a weighted norm L_r (0 < r < 2), or by aggregating data terms that cannot be realized in existing edge-preserving (EP) filters.
  • the K-means method is used for classification processing.
  • the selected ROI area can be divided into 3 categories to distinguish the end of the nozzle, the printing plate surface and the printing material.
  • the print nozzle belongs to the second category of the K-means classification, so according to the present invention only the second category is extracted and the remaining categories are set to white.
  • the second-category image is obtained to better shield noise interference, and the Canny detection method can then effectively obtain the edge of the print nozzle tip.
  • these edge points are used as the data points of the Hough line detection, and the Hough line detection is performed.
  • the threshold between the two straight lines is set to 10.
  • the number of fitting pieces in the Hough straight line detection process is set to 3 or less.
  • the position of the coordinate point and the real-time running state of the printer are judged in real time.
  • when the tip point of the print nozzle is below the fitted line and the printer is determined to still be moving, printing is still in progress, and this position is the actually obtained position.
  • when the tip point of the print nozzle lies above or outside the fitted line, regardless of whether the printer is working, the printing process is approaching a stop; detection should be stopped immediately and the position deleted.
  • when switching cameras leaves the opposing cameras unable to form a binocular pair to detect the tip position while the print nozzle moves, the direction and position of the print nozzle are reacquired and the two cameras facing the nozzle direction are reselected.
  • the CNN model training method is then used again to obtain the initial ROI area, after which the tracking and detection algorithms obtain the three-dimensional nozzle position in real time and feed back adjustments to the printing process until it ends.
  • step 1) at least one of the following training steps is preferably performed:
  • Input three or more complex printing models (for example, models with varied surfaces and many types, preferably covering as many of the surface forms encountered during printing as possible);
  • in step d), the collected video can be converted into images, and the nozzle area in the images can be marked as training samples.
  • the accuracy problem of existing 3D printing methods applied to artificial bone scaffold material arises because, in the three-coordinate printing method adopted, the model input at the beginning of printing may contain deviations.
  • since the end of the robot arm pauses and retracts after printing ends, the tip point is then above the contour curve; therefore, the position of the nozzle tip point must be determined in real time to establish the actual point of the real print nozzle.
  • after startup, the computer first receives the ideal position of the end of the robotic arm; owing to the size of the print nozzle, the distance between the nozzle and the camera, and the quality of the captured image, it is impossible to take this point as the center and set the surrounding area as the initial ROI area.
  • the method of deep learning is adopted in the present invention to obtain the initial target area.
  • a fast multi-exposure fusion method is proposed.
  • This method uses the advantages of multi-exposure fusion to effectively adjust the interference influence of uneven ambient light on the acquired image. It is of great help to improve the training accuracy of the image and the accuracy of the final ROI area.
  • the monitoring method of the present invention in the feedback process of pre-optimized 3D printing can effectively improve the quality of the printed image and the position accuracy of the nozzle at the printing end of the robot arm; it solves the problem that, for lack of a feedback system and image collection, the 3D printing process is vulnerable to interference from the external environment on the material at the nozzle tip, so that the position information of the nozzle tip can be detected accurately and effectively in real time, providing reliable information for feedback correction and correcting the trajectory of 3D printing in real time.
  • the flexibility and efficiency of the method of the present invention significantly accelerate a series of applications that typically require solving large linear systems.
  • Fig. 1 is a schematic diagram of the 3D printing device of the present invention.
  • Figure 2 is a flowchart of the algorithm for acquiring the initial ROI region.
  • Figure 3 is a flow chart of the pre-optimized algorithm for color segmentation.
  • Figure 4 shows the overall flow chart of the algorithm.
  • Figure 5 is a flow chart of a fast multi-exposure fusion method.
  • Figure 6 is a detection diagram of the tracking sub-process, where a: nozzle tracking position; b: K-means detection diagram; c: edge detection diagram; d: Hough straight line detection diagram; e: end point positioning diagram.
  • Figure 7 is a schematic diagram of finding the intersection of Hough lines.
  • Figure 8 is a schematic diagram of error compensation.
  • Figure 9 is a schematic diagram of a hybrid printing path.
  • Figure 10 is a schematic diagram of a free-form surface model and its point cloud.
  • Figure 11 is a schematic diagram of the cross-section and the overall point cloud fitting.
  • Figure 12 shows the point cloud triangulation and normal vector calculation results.
  • Figure 13 is a schematic diagram of Euler angle calculation.
  • Figure 14 is a photo of a flat print sample.
  • Figure 15 is a photo of the surface coating effect.
  • Figure 16 shows the detection method of Example 2.
  • Robot: Universal Robots UR3.
  • Vision system: MindVision industrial camera, AVT GX6600B; lens: BT-F036.
  • the 3D printing system mainly includes a six-axis robotic arm, a four-axis linkage printing platform, a print head, and a visual tracking positioning module.
  • the movement of the six-axis robotic arm is used to achieve spray printing on the surface of complex and fine objects, and the end of the print nozzle is tracked and positioned by a multi-eye camera, combined with the four-axis linkage printing platform for printing compensation movement, to achieve high precision on the surface of the bioprosthesis 3D printing.
  • the exterior of the 3D printing system uses aluminum alloy brackets, and the walls use relatively lightweight PC compression panels.
  • the six-axis robotic arm used in the system has six spatial degrees of freedom, high mobility, and can achieve precise positioning in complex curved spaces.
  • the print nozzle is installed at the end of the six-axis robotic arm.
  • the six-axis robotic arm controls the print nozzle to perform three-dimensional patterned printing on the surface of the bioprosthesis.
  • the discharge of the print head is controlled by an electronic pressure regulator to ensure the uniformity of the discharge.
  • the printing platform is a four-axis linkage platform with four degrees of freedom of movement, including linear movement in the three directions of X, Y, and Z and rotational movement about the Z axis, which can move at the same time.
  • the four-axis linkage platform consists of three linear modules and a high-precision turntable. Through the three-dimensional movement in space and the rotation in the Z-axis direction, the six-axis mechanical arm controls the movement and positioning of the print head, adjusts the printing position, and realizes complex curved surfaces. Patterned printing.
  • the main function of the vision hardware solution is to determine the three-dimensional space position of the needle, realize the measurement of the printing needle, the automatic position correction, and the center measurement of the needle tip and the needle mark. According to the functional requirements and accuracy requirements of the system, a detection scheme for the vision measurement system was designed.
  • in the multi-view camera system used in this project, the light source adopts an opposed, double-row LED design whose brightness can be remotely adjusted as needed.
  • the vision system is composed of two binocular subsystems arranged on the surrounding sides, and binocular subsystems can be added dynamically depending on the actual situation.
  • the parallel binocular system measurement method has the advantages of high efficiency, appropriate accuracy, simple system structure, and low cost. It is very suitable for online, non-contact product inspection and quality control at the manufacturing site.
  • for measuring moving objects, since image acquisition is completed in an instant, the parallel binocular system is a more effective measurement method. Since the robotic arm occludes the view during movement, two sets of vision systems are required to ensure that the probe can be detected. At the same time, multiple camera systems provide more data and can determine the position of the probe more accurately.
  • when the robotic arm detection algorithm is turned on, it detects the tilt direction of the robotic arm and activates the two cameras hanging around the arm and facing the tilt direction to form a binocular vision system; which cameras face the tilt direction is determined from the posture information fed back by the end of the arm.
  • the pre-optimization method of color segmentation and the fast multi-exposure fusion method are used to adjust the quality of the image.
  • the pre-optimization method for color segmentation includes the following steps:
  • the colors of the nozzles printed in the experiment are five kinds selected from red, purple, green, cyan and blue;
  • the predetermined threshold range is as follows:
  • the visual tracking processing algorithm for the end of the 3D printing nozzle of this project includes a tracking and positioning module, a nozzle extraction module, and a terminal 3D point detection module.
  • High-precision visual measurement and tracking is a technology that detects, recognizes and tracks specific moving targets by combining machine vision with automation technology, and measures their three-dimensional coordinate information. First, from the high-precision cameras set up around the workspace, the best two are selected autonomously based on the position information of the robotic-arm end to form binocular stereo vision; second, a target tracking model combining preset methods with correlation filtering identifies and tracks the printer nozzle tip effectively; finally, the Hough line detection method extracts the tip point of the print nozzle, and the position of the tip point is calculated according to the parallax principle.
  • after the visual equipment obtains the actually printed tip point, it is compared with the actual three-dimensional point of the model input to the computer. If the error at this point is within a certain threshold, the four-axis linkage platform does not start; if it exceeds the error threshold, the corresponding compensation is initiated according to the error level.
  • the acquisition of the end point of the 3D printing nozzle is mainly divided into the positioning method of the initial position of the target, the target tracking method, the target extraction algorithm and the end point extraction method.
  • the image is optimized by fast multi-exposure fusion method to adjust brightness.
  • the brightness threshold is 30, with [30, 255−30] considered the reasonable brightness interval; pixels within this interval are set to 1 and the rest to 0, thereby obtaining the exposure weight map;
  • the input image is fused to optimize the image.
  • the tracking and positioning of the 3D print nozzle tip is the most important step in the entire visual inspection and the prerequisite for the other visual inspections.
  • its main function is to separate the print nozzle from the entire field of view as an independent processing unit for detection by subsequent modules. In this step, samples from the four cameras are first trained with the CNN (Convolutional Neural Network) model, the target object is judged and recognized, and the initially selected ROI area where it is located is marked.
  • the main purpose of tracking the end of the print head is to effectively extract the three-dimensional information of the end point.
  • because the nozzle colors are numerous and vary considerably, traditional tracking algorithms can hardly meet the needs of the project, so we adopt a discriminative model and use a classic correlation-filtering tracking algorithm.
  • correlation-filter tracking is divided into three steps: first, in frame I_t, samples are taken near the current position P_t and a regressor is trained; this regressor can calculate the response of a small sampled window. Second, in frame I_{t+1}, samples are taken near the previous position P_t, and the response of each sample is judged by the aforementioned regressor. Finally, the sample with the strongest response is taken as the current frame position P_{t+1}.
  • when the target object is being tracked, we can obtain the area where the target object is located in real time through the tracking box, and then extract the target.
  • the target area is filtered by fast least squares method with adaptive boundary limitation to smooth the input target area. Since the image at the end of the print head has a relatively large contrast in the entire image, the classic K-Means (kmeans) algorithm is used for classification processing to extract the image of the largest class in the figure. After K-Means classification processing, the end of the nozzle is effectively segmented, as shown in Figure 6.
  • to obtain the three-dimensional coordinate point: first, grayscale the acquired tip target; second, threshold the grayscale image of the target object and smooth it; then, perform edge detection on the smoothed image; next, perform Hough line detection to obtain the intersection point; finally, obtain the three-dimensional coordinates of the nozzle tip.
  • the printing platform control includes the following parts:
  • the movable printing platform with four-axis high-precision stepping and servo control can be used to compensate the printing position error in real time;
  • Control scheme design: control based on a PLC programmable controller and the visual error compensation system for real-time error-information compensation;
  • Intelligent control of the print nozzle: the print volume of the nozzle is adjusted by controlling the air pressure through an analog quantity.
  • the control process is: after the four-axis linkage platform system starts, press reset and each axis of the platform automatically finds the origin, resetting the axes in sequence. After axis reset completes, press start and the system automatically waits for compensation information from the host computer's visual error compensation system to perform position compensation; after compensation completes, the compensation information is obtained again for automatic compensation, and it runs back and forth in this way.
  • the electrical control system adopts PLC as the main controller according to requirements, and the control unit is two servo motors, two stepping motors, and an analog pressure valve.
  • the PLC and the robot adopt TCP communication control.
  • when the robot reaches the designated printing position, the designated data is sent to the server through TCP, and the PLC obtains the designated air pressure information and the start signal to open or close the analog pressure valve.
  • the vision system performs real-time position detection by calculating real-time position error information, collating and analyzing the data, and sending it to the server, while the PLC obtains the data for position compensation, thereby improving printing accuracy.
  • the printing platform adopts a four-axis moving device for automatic reset and position compensation, and can start any axis for position compensation in real time.
  • the PLC, the visual error system and the robot system can access the database information simultaneously through the standard TCP protocol, realizing multi-terminal data information sharing.
  • there are three subsystems in the current system: the robotic arm system, the vision system, and the motion platform system. A communication system must be built to enable these three subsystems to communicate with each other, ensuring the real-time behavior and accuracy of the system's printing process.
  • the C/S architecture is used for communication.
  • the communication system is mainly based on .net remoting and TCP/IP protocols, and has written server-side programs and multiple client-side programs.
  • the client can send messages to the service program, and the service program can broadcast information to the client program. You can open multiple clients to communicate at the same time.
  • the vision subsystem and the robotic arm subsystem respectively send the detected position coordinates and the ideal position coordinates to the server; the server generates the coordinate difference after calculation and broadcasts it to each subsystem. Among them, the vision subsystem and the robotic arm subsystem ignore this message, and the motion platform system receives this message and adjusts the position.
  • the four cameras collect pictures through the capture card, and the needle-tip detection algorithm is then called to extract the needle-tip coordinate position (including time information) from each picture. These needle-tip coordinate positions are then stored in the "collection point database".
  • the robot system transmits the motion coordinates of the robot arm to the vision system through the Remoting communication protocol, and stores it in the motion trajectory data table.
  • the needle tip detection algorithm can also use the motion trajectory data table information.
  • the printing actuator is a UR3 robotic arm with a needle tube at the end, and inkjet printing is achieved by pressing and extruding the material during printing.
  • the printed material is located on the four-axis linkage platform, and the position identification of the printing end point is obtained by multi-eye stereo vision measurement.
  • the preprocessing of the print model is divided into two different processes.
  • for deposition (layer-stacking) 3D printing, the model data is sliced and a per-slice path is planned, finally forming a complete printing path;
  • for free-form surface-coating 3D printing, the path points on the surface are found first, then triangulated to find the normal vector of each path point; the posture parameters controlling the manipulator are calculated from the normal vectors, and finally the path-point positions are combined to form the control file of the free-form surface spraying path.
  • this embodiment detects the nozzle tip position with binocular vision, calculates the position error of the path point, and then compensates the position error by moving the platform, reducing the error to overcome the above problem.
  • the compensation process is: spraying path points are set at a distance of about 1 mm in the XY plane; when the end of the robotic arm reaches a path point, a signal is sent to the vision system, which collects the tip position and compares it with the preset position. The compensation value is fed back to the mobile platform, which moves back and forth according to the compensation value (a moving distance of half the deviation) to compensate for the deviation of the manipulator end, achieving the error-compensation effect.
  • the error compensation effect is shown in Figure 8.
  • the samples printed directly with a zigzag pattern have poor planar uniformity, with convex bulges especially at the corners.
  • the robot arm is therefore controlled to move along a hybrid path; the movement trajectory is shown in Figure 9.
  • the hybrid path (Figure 9a) improves edge accuracy compared with the zigzag path, but printing material accumulates at the edge corners, so control points were added to the edge path (Figure 9b); increasing the running speed of the turning path after the control points and reducing the material extrusion pressure can effectively reduce material accumulation at the inflection points.
  • the control of the end position and posture of the robotic arm is the core key technology of 3D printing.
  • the surface model of Figure 10a is used to illustrate the data processing process of free-form surface spraying.
  • the point cloud scanned by the line laser sensor is shown in Figure 10b. It can be seen that the scanned point cloud has steps in the height direction of the surface, which greatly affect the accuracy of surface spraying; therefore, the surface point cloud must be fitted according to the error situation.
  • the sensor's X-axis point-cloud scanning interval is set to 0.3 mm, and the Y-axis scanning interval to 1 mm.
  • two-dimensional point fitting is used instead of three-dimensional surface reconstruction. Fit the X-axis cross-section of the point cloud model first, and then fit the Y-axis cross-section.
  • the fitting method is the least square method. After fitting each group of points, they are merged into a new three-dimensional model. When the model surface is more complicated, the fitting function of each point set can be different. This can ensure that the final surface model has a higher accuracy. Take a set of points as an example for graphical display.
  • the result of the second-order function fit is the closest to the original model, as shown in Figure 11a, where indigo * represents the original points, blue dots represent the second-order fitting result, and red o dots represent the fourth-order fitting result.
  • the original point cloud model and the fitted point cloud model are shown in Figure 11b, and the step effect error is eliminated after fitting.
  • the normal vector of each point in the point cloud model needs to be calculated as the attitude control parameter when the robot arm sprays to that point.
  • the triangulation method is used to reconstruct the surface, and the vector is calculated from the adjacent points and cross-multiplied to obtain the normal vector of the point. The calculation result is shown in Figure 12.
  • the attitude control parameters of the UR3 manipulator need to be calculated according to the normal vector.
  • the calculation process is to first calculate the corresponding Euler angle (roll, pitch, yaw) according to the space normal vector, and then convert it into the rotation vector Rx, Ry, Rz that controls the posture of the robotic arm. Combining the control positions x, y, z of the path points, the 6-dimensional control vector (x, y, z, Rx, Ry, Rz) of the robotic arm can be obtained.
  • when the posture parameter is [0 0 0], the end posture of UR3 is the vector [0 0 1].
  • describing the Euler angles in an XYZ fixed-angle frame, the roll angle of the vector [1 2 3] is the 0.588 rad angle (expressed in radians) between the vector [0 2 3] and the vector [0 0 1], with negative direction; the pitch angle is the 0.322 rad angle between the vector [1 0 3] and the vector [0 0 1], with positive direction; and the yaw angle is 0.
  • the 6-dimensional control vector of all path points of the free-form surface can be obtained, thereby realizing printing control.
  • UR3 can realize offline remote programming control through its own API library file.
  • the control platform includes C#, Python, etc.
  • the UR3 control software was developed in C# language on the VS platform.
  • the main functions of the software include: XYZ translation control of UR3, single-axis rotation control, current position display, zigzag, spiral and ring-shaped path printing control, reading path files for printing, etc.
  • the free-form surface coating is carried out with the model shown in Figure 10, and the Z-shaped path is adopted.
  • the coating effect is shown in Figure 15. It can be seen from the coating effect that the robotic arm achieves the expected printing effect, but the printing accuracy still needs improvement.
  • when the UR robotic arm performs curved-surface coating printing, slight jitter still occurs at a movement speed of 1 mm/s, whereas flat printing shows none.
  • the greater the deflection angle of the end of the robotic arm, the greater the jitter, which is a major factor affecting the printing accuracy.
  • Another important factor that affects the printing accuracy is the extrusion speed of the material.
  • the method solves the interference of the external environment with the material at the nozzle tip, so that the position information of the nozzle tip is detected accurately and effectively in real time, providing reliable information for feedback correction and correcting the 3D printing trajectory in real time; the printing accuracy is significantly improved.

Landscapes

  • Chemical & Material Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Materials Engineering (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A 3D printing device based on multi-axis linkage control and machine vision feedback, comprising: a robotic arm, a nozzle, cameras, a printing table, and a driving device and/or transmission device; the robotic arm is a multi-axis robotic arm, preferably a six-axis robotic arm, and preferably four or more cameras are provided and arranged around the robotic arm. Also provided is a pre-optimization-based method for real-time tracking and positioning of the 3D print nozzle tip, realizing real-time positioning and correction for high-precision, intelligent 3D printing of artificial bone. First, an initial recognition box for the target is obtained with a CNN method; second, a new fast multi-exposure fusion method pre-optimizes the input image to improve image quality and recognition accuracy; then a correlation-filtering algorithm tracks the target; finally, a fast least-squares filtering method with adaptive boundary limitation, the K-means algorithm, dilation and erosion, the Canny algorithm and Hough line detection are applied to stereoscopically locate the nozzle tip point. Artificial bone scaffold material formed with the above device and method is expected to replace traditional methods, avoid secondary trauma to patients, and enable individualized scaffold customization, to the benefit of people with bone defects.

Description

3D printing device and method based on multi-axis linkage control and machine vision feedback measurement

Technical Field
The present invention relates to a 3D printing device and method based on multi-axis linkage control and machine vision feedback measurement, and belongs to the field of automation technology.
Background Art
With rising living standards, people pay increasing attention to medical care. However, diseases, traffic accidents and the like cause severe damage to human bone (so-called "bone defects"), leaving many patients unable to care for themselves and seriously affecting patients and their families. At present, the main solutions to bone damage, especially the clinical orthopedic challenge of repairing large bone defects, rely on autologous tissue transplantation, allogeneic tissue transplantation, or repair with substitute materials. However, these methods all have major drawbacks, such as requiring two operations, limits on the amount of bone that can be harvested, possible disease transmission, and low osteogenic activity. For example, autologous bone is harvested from another part of the patient's body and used where needed, but the amount of bone available is limited and the required shape often cannot be obtained.
The development of tissue engineering offers a new approach to bone defect repair: artificial bone is expected to replace traditional autologous or allogeneic bone and avoid secondary trauma to the patient, so artificial bone scaffold materials and their preparation have become a research focus. 3D printing technology can regulate scaffold pore size, porosity, connectivity and specific surface area, and also enables individualized scaffold customization. However, existing 3D printing equipment lacks sufficient accuracy when applied to the preparation of artificial bone scaffold materials, and further improvement is urgently needed to extend the application range, stability and safety of such materials.
Summary of the Invention
To address the above technical problems, the present invention provides a 3D printing device based on multi-axis linkage control and machine vision feedback, comprising: a robotic arm, a nozzle, cameras, a printing table, and a driving device and/or transmission device, wherein the robotic arm is a multi-axis robotic arm, preferably a six-axis robotic arm.
According to an embodiment of the invention, preferably four or more cameras are provided, for example a four-camera setup. More preferably, the cameras are arranged around the robotic arm, for example on its four sides.
The invention also provides a pre-optimization-based method for real-time tracking and positioning of the 3D print nozzle tip, comprising the following steps:
1) adjusting and optimizing the printing environment;
2) determining the initially selected ROI (Region of Interest) where the target object is located;
3) optimizing the initially selected ROI.
According to an embodiment of the invention, in step 1), the printing environment is adjusted and optimized by a color-segmentation pre-optimization method and/or a fast multi-exposure fusion method.
According to an embodiment of the invention, in step 2), samples from four cameras are trained with a CNN (Convolutional Neural Network) model to judge and recognize the target object and mark the initially selected ROI where it is located.
According to an embodiment of the invention, in step 3), a fast least-squares filtering method with adaptive boundary limitation performs edge-preserving smoothing on the image, thereby optimizing the initially selected ROI.
According to an embodiment of the invention, the tracking and positioning of the 3D print nozzle tip is monitored in real time by visual means, and the printing is corrected in real time according to the positioning feedback.
According to an embodiment of the invention, the method is implemented using the above 3D printing device based on multi-axis linkage control and machine vision feedback.
According to an embodiment of the invention, the pre-optimization-based real-time tracking and positioning method for the 3D print nozzle tip may further comprise the following steps:
i) inputting the model to be printed into the 3D printer;
ii) starting the robotic-arm detection algorithm, detecting the tilt direction of the robotic arm, and activating the two cameras suspended around the arm and facing the tilt direction to form a binocular vision system;
iii) adjusting image quality through this binocular vision system using the color-segmentation pre-optimization method and the fast multi-exposure fusion method;
iv) training samples from the four cameras with the CNN model, judging and recognizing the target object, and marking the initially selected ROI where it is located;
v) with this initial ROI set, tracking the print nozzle with a correlation-filtering target tracking algorithm (i.e., the tracking algorithm adopts a correlation-filtering method);
vi) extracting the target in real time through the tracking box, and applying fast least-squares filtering with adaptive boundary limitation to the image of this region to smooth it while preserving edges;
vii) classifying the processed features with the K-means algorithm, obtaining the image of the class containing the nozzle, and segmenting out the target;
viii) obtaining the outer contour of the printing tip with the Canny algorithm, then applying Hough line detection to the edge image and computing the midpoint of the intersection.
According to an embodiment of the invention, which cameras face the tilt direction of the robotic arm is determined from the posture information fed back by the end of the arm.
According to an exemplary embodiment of the invention, the pre-optimization-based real-time tracking and positioning method for the 3D print nozzle tip follows essentially the flow shown in Figure 4.
According to an embodiment of the invention, a color-segmentation pre-optimization method is also provided; an exemplary flow is shown in Figure 3. The color-segmentation pre-optimization method comprises the following steps:
a) converting the input color image to the HSV color space with a color conversion function;
b) comparing color thresholds in the H, S and V channels against predetermined threshold ranges; the nozzle colors printed in the experiments may be several, for example five selected from red, purple, green, cyan and blue;
when a color is within the predetermined threshold range, it is kept as a valid value; otherwise the value is discarded;
c) smoothing the acquired image, here with a median filter, to remove single-pixel noise;
d) extracting contours from the smoothed image, i.e., drawing the bounding rectangle of each independent object, and removing targets inside irrelevant rectangles by aspect ratio and area;
e) drawing the selected image according to the optimization result, keeping only the print nozzle in the image.
Preferably, if the image of the print nozzle is dim, another optimization method is started.
According to an embodiment of the invention, the predetermined threshold ranges are as follows:

[HSV threshold table, provided as image PCTCN2020090093-appb-000001 in the original document]
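As an illustration of steps a)-e), a minimal OpenCV sketch of this pre-optimization follows. The HSV range is a hypothetical placeholder (the patent's actual threshold table is only available as an image), and the aspect-ratio and area limits in step d) are likewise illustrative.

```python
import cv2
import numpy as np

def color_presegment(bgr, lower_hsv, upper_hsv):
    # a) convert the input color image to the HSV color space
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # b) compare against the predetermined H/S/V threshold range
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    # c) median filter to remove single-pixel noise
    mask = cv2.medianBlur(mask, 5)
    # d) contour extraction: keep bounding rectangles with plausible
    #    aspect ratio and area, discard irrelevant rectangles
    keep = np.zeros_like(mask)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if 0.2 < w / float(h) < 5.0 and w * h > 200:   # illustrative limits
            cv2.drawContours(keep, [c], -1, 255, -1)
    # e) draw the selected image: only the print-nozzle region remains
    return cv2.bitwise_and(bgr, bgr, mask=keep)

# hypothetical range for a blue nozzle (the real table is in the patent image)
result = color_presegment(cv2.imread("frame.png"),
                          np.array([100, 80, 80]), np.array([130, 255, 255]))
```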
According to an embodiment of the invention, a fast multi-exposure fusion method is also provided; an exemplary flow is shown in Figure 5. Preferably, the fast multi-exposure fusion method comprises: continuously acquiring images while the cameras run and continuously computing the average image brightness; if it falls below the set brightness value, the fast multi-exposure fusion method is started to optimize the image.
According to an embodiment of the invention, the fast multi-exposure fusion method comprises the following steps:
a) converting the input image to grayscale, applying gamma correction of different degrees to adjacent frames as initial correction, and applying high- and low-pass filtering to these images;
b) taking the maximum brightness value of each pixel across these images as the local contrast weight;
c) judging the brightness of these grayscale images with a discriminant method: the brightness threshold is 30, [30, 255−30] is considered the reasonable brightness interval, pixels within this interval are set to 1 and the rest to 0, yielding the exposure weight map;
d) applying histogram equalization to the input image, then median filtering to obtain the initial color weight map, and then dilation and erosion to obtain the final color weight map;
e) multiplying the exposure weight map by the color weight map, normalizing the product, multiplying the normalized result by the local contrast weight to obtain the initial fusion weights, and then filtering the initial fusion weights with a recursive filter to obtain the final fusion weights;
f) fusing the input images according to these fusion weights, thereby optimizing the image.
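A condensed Python sketch of steps a)-f) follows. It is an interpretation of the flow above under stated assumptions, not the patent's exact implementation: the gamma values and kernel sizes are assumptions, the high/low-pass filtering of step a) is omitted, and a Gaussian blur stands in for the recursive weight filter.

```python
import cv2
import numpy as np

def fast_multi_exposure_fusion(gray, gammas=(0.5, 1.0, 2.0)):
    # a) gamma-corrected versions of the frame form the exposure stack
    stack = []
    for g in gammas:
        lut = (np.linspace(0, 1, 256) ** g * 255).astype(np.uint8)
        stack.append(cv2.LUT(gray, lut))
    # b) per-pixel maximum brightness over the stack as local contrast weight
    contrast = np.max(np.stack(stack), axis=0).astype(np.float32) / 255.0
    imgs, weights = [], []
    for im in stack:
        # c) exposure weight: 1 inside [30, 255-30], 0 elsewhere
        exposure = ((im >= 30) & (im <= 225)).astype(np.float32)
        # d) color weight: equalization, median filter, then dilation/erosion
        cw = cv2.medianBlur(cv2.equalizeHist(im), 5)
        k = np.ones((5, 5), np.uint8)
        cw = cv2.erode(cv2.dilate(cw, k), k).astype(np.float32) / 255.0
        weights.append(exposure * cw)
        imgs.append(im.astype(np.float32) / 255.0)
    # e) normalize, multiply by the contrast weight, and smooth the weights
    #    (GaussianBlur stands in for the recursive filter of the patent)
    wsum = np.sum(weights, axis=0) + 1e-6
    weights = [cv2.GaussianBlur(w / wsum * contrast, (0, 0), 2.0) for w in weights]
    # f) fuse the stack with the final fusion weights
    fused = sum(w * im for w, im in zip(weights, imgs))
    fused /= np.sum(weights, axis=0) + 1e-6
    return (np.clip(fused, 0, 1) * 255).astype(np.uint8)

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
if gray.mean() < 60:            # hypothetical trigger threshold
    gray = fast_multi_exposure_fusion(gray)
```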
According to an embodiment of the invention, a method is also provided for target tracking of the print nozzle, in particular a moving print nozzle, which comprises tracking with a correlation-filtering target tracking algorithm (KCF).
According to an embodiment of the invention, in the correlation-filtering target tracking algorithm, HOG features are first extracted from multiple regions around the selected ROI, and a circulant matrix is then used to solve for the ROI selected in the next frame.
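For the correlation-filtering tracker itself, a ready-made KCF implementation ships with opencv-contrib-python; a minimal usage sketch under that assumption (camera index and the initial box from the CNN stage are hypothetical values):

```python
import cv2

# TrackerKCF comes from opencv-contrib-python; some 4.x builds expose it
# under cv2.legacy instead of cv2
create_kcf = getattr(cv2, "TrackerKCF_create", None) or cv2.legacy.TrackerKCF_create

cap = cv2.VideoCapture(0)                  # hypothetical camera index
ok, frame = cap.read()
tracker = create_kcf()
tracker.init(frame, (300, 200, 80, 80))    # initial ROI from the CNN stage

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:                              # tracking box around the nozzle
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```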
According to an embodiment of the invention, when a new selected ROI is obtained, the image of this region is first processed with fast least-squares filtering with adaptive boundary limitation, which effectively keeps object edges intact while smoothing the remaining non-edge regions.
According to an embodiment of the invention, the boundary limitation adaptively adjusts the image-boundary region through a tolerance mechanism, further regularizing the image.
According to an embodiment of the invention, the correlation-filtering method first proposes an efficient alternative for finding the solution of an objective function defined on a weighted L2 norm, which decomposes the objective function into each spatial dimension and solves the matrix with a fast one-dimensional solver; the method is then extended to the more general case by solving objective functions defined on a weighted norm $L_r$ (0 < r < 2), or by aggregating data terms that cannot be realized in existing edge-preserving (EP) filters.
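The weighted-least-squares smoother described here (per-dimension decomposition with fast 1-D solvers) corresponds to the fast global smoothing family implemented in OpenCV's ximgproc module; the patent's adaptive boundary limitation is not part of that API, so the call below is only a baseline sketch with illustrative λ and σ values:

```python
import cv2

roi = cv2.imread("roi.png")
# Edge-preserving WLS smoothing; requires opencv-contrib-python (ximgproc).
# Positional arguments: guide image, source image, lambda (smoothing
# strength) and sigma_color (edge sensitivity) -- both values illustrative.
smoothed = cv2.ximgproc.fastGlobalSmootherFilter(roi, roi, 100.0, 8.0)
```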
According to an embodiment of the invention, the K-means method is used for classification. Preferably, the selected ROI can be divided into 3 classes to distinguish the nozzle tip, the printing plate surface and the printing material. More preferably, the print nozzle belongs to the second class of the K-means classification, so according to the invention only the second class is extracted and the remaining classes are set to white.
According to an embodiment of the invention, the second-class image is obtained to better shield noise interference, and the Canny detection method can then extract the edge of the print nozzle tip quite effectively.
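A sketch of the K-means step with OpenCV, under the stated assumption of 3 classes; how the "second class" is indexed is not specified in the text, so the clusters are ordered by brightness here as a working assumption:

```python
import cv2
import numpy as np

def extract_second_class(roi_bgr, k=3):
    # cluster ROI pixels into k=3 classes: nozzle tip, plate surface, material
    data = roi_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    # order clusters by mean brightness so "second class" is deterministic
    second = np.argsort(centers.mean(axis=1))[1]
    out = np.full_like(data, 255.0)            # set remaining classes to white
    sel = labels.ravel() == second
    out[sel] = data[sel]
    return out.reshape(roi_bgr.shape).astype(np.uint8)
```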
According to an embodiment of the invention, for the relatively complete extracted edge image, these edge points are used as data points for Hough line detection, and Hough line detection is performed.
According to an embodiment of the invention, preferably, in line detection with the HoughLinesP function, the threshold between two lines is set to 10.
According to an embodiment of the invention, preferably, the number of fitted lines in the Hough line detection is limited to 3 or fewer.
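The edge and line steps reduce to a short OpenCV pipeline; the accumulator threshold of 30 (given later in the description) and the gap threshold of 10 follow the values in the text, while minLineLength and the Canny thresholds are assumptions:

```python
import cv2
import numpy as np

def nozzle_tip_lines(segmented_gray):
    # smooth, detect edges, then fit at most 3 Hough line segments
    edges = cv2.Canny(cv2.GaussianBlur(segmented_gray, (5, 5), 0), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines[:3]]

def intersection(l1, l2):
    # intersection of the two infinite lines through the detected segments
    (x1, y1, x2, y2), (x3, y3, x4, y4) = l1, l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None                       # parallel lines: no intersection
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py
```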
According to an embodiment of the invention, while solving for the three-dimensional coordinates, the position of the coordinate point and the real-time running state of the printer are judged in real time. When the tip point of the print nozzle is detected below the fitted line and the printer is determined to still be moving, printing is still in progress and this position is the actually obtained position. When the tip point is detected above or outside the fitted line, regardless of whether the printer is working, the printing process is approaching a stop; detection is stopped immediately and the position is deleted.
According to an embodiment of the invention, when switching cameras leaves the opposing cameras unable to form a binocular pair to detect the tip position while the nozzle moves, the direction and position of the print nozzle are reacquired, the two cameras facing the nozzle direction are reselected, the CNN model training method is used again to obtain the initial ROI, and the tracking and detection algorithms are then applied to obtain the three-dimensional nozzle position in real time and feed back adjustments to the printing process until it ends.
According to an embodiment of the invention, before step 1), at least one of the following training steps is preferably performed:
a) inputting three or more complex printing models (for example models with varied surfaces and many types, preferably covering as many of the surface forms encountered during printing as possible);
b) running the printer without air supply and without adding printing material;
c) capturing video and/or images of the print nozzle with the cameras until printing ends;
d) marking the nozzle region in the captured video and/or images as training samples;
e) building a CNN training network on these samples, training it, and obtaining the training results;
f) determining the initial selection box from the training results.
According to an embodiment of the invention, in step d), the captured video can be converted into images and the nozzle region in the images marked as training samples.
Beneficial Effects
The inventors found that the accuracy problem of existing 3D printing methods applied to artificial bone scaffold materials stems from the three-coordinate printing approach adopted: the model input at the start of printing may contain deviations, and the lack of an image pre-optimization step and of real-time monitoring and correction during printing greatly reduces equipment accuracy, failing to meet higher quality requirements. Moreover, since the end of the robotic arm pauses and retracts after printing ends, at which moment the tip point lies above the contour curve, the position of the nozzle tip point must be judged in real time to determine the actual point of the real print nozzle. In addition, after the 3D printer is started, the computer first receives the ideal position of the end of the robotic arm; because of the size of the print nozzle, the distance between nozzle and camera, and the quality of the captured images, this point cannot be taken as the center with the surrounding region set as the initial ROI.
The present invention therefore adopts deep learning to obtain the initial target region. In this process, considering the influence of ambient light on 3D-printing image acquisition, a fast multi-exposure fusion method is proposed, which uses the advantages of multi-exposure fusion to effectively counter the interference of uneven ambient light on the acquired images and substantially improves training accuracy and the accuracy of the finally obtained ROI. Furthermore, the monitoring method in the feedback process of pre-optimized 3D printing according to the invention can effectively improve the printed image quality and the position accuracy of the nozzle at the printing end of the robotic arm; it solves the problem that, for lack of a feedback system and image acquisition, the 3D printing process is vulnerable to interference from the external environment on the material at the nozzle tip, so that the position of the nozzle tip can be detected accurately and effectively in real time, providing reliable information for feedback correction and correcting the 3D printing trajectory in real time.
In addition, the flexibility and efficiency of the method of the invention significantly accelerate a range of applications that typically require solving large linear systems.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the 3D printing device of the invention.
Figure 2 is a flowchart of the algorithm for acquiring the initial ROI.
Figure 3 is a flowchart of the color-segmentation pre-optimization algorithm.
Figure 4 is the overall flowchart of the algorithm.
Figure 5 is a flowchart of the fast multi-exposure fusion method.
Figure 6 shows detection results of the tracking sub-process, where a: nozzle tracking position; b: K-means detection; c: edge detection; d: Hough line detection; e: tip point localization.
Figure 7 is a schematic diagram of finding the intersection of Hough lines.
Figure 8 is a schematic diagram of error compensation.
Figure 9 is a schematic diagram of the hybrid printing path.
Figure 10 is a schematic diagram of a free-form surface model and its point cloud.
Figure 11 is a schematic diagram of cross-section and overall point-cloud fitting.
Figure 12 shows the point-cloud triangulation and normal-vector calculation results.
Figure 13 is a schematic diagram of the Euler angle calculation.
Figure 14 is a photo of a flat printed sample.
Figure 15 is a photo of the surface coating effect.
Figure 16 shows the detection method of Example 2.
Detailed Description of the Embodiments
The technical solution of the invention is described in further detail below with specific examples. It should be understood that the following examples only illustrate and explain the invention and are not to be construed as limiting its scope. All techniques realized on the basis of the above content of the invention fall within the intended scope of protection.
Unless otherwise stated, the raw materials and reagents used in the following examples are commercially available or can be prepared by known methods.
Sources and specifications of the instruments, raw materials and devices below:
Robot: Universal Robots UR3;
Vision system: MindVision industrial camera, AVT GX6600B; lens: BT-F036.
Example 1
1. Preparation of raw materials
a) First weigh 6 g of hydroxyapatite powder on an electronic balance and pour it into a large beaker;
b) Next, measure 28 ml of water with a graduated cylinder and mix; place the beaker in an ultrasonic mixer and mix the contents ultrasonically. When the mixture has become a slurry, stop the mixer;
c) Remove the beaker, weigh 4 g of sodium alginate on the electronic balance, pour it into the beaker and mix again;
d) Pour the mixed slurry into the print nozzle through a funnel for later use.
2. Installation of the hardware system
As shown in Figure 1, the 3D printing system mainly comprises a six-axis robotic arm, a four-axis linkage printing platform, a print nozzle and a visual tracking and positioning module. The motion of the six-axis robotic arm realizes spray printing on complex, fine object surfaces; the nozzle tip is tracked and positioned by a multi-camera setup, and the four-axis linkage printing platform performs compensating motion, achieving high-precision 3D printing on the surface of a bioprosthesis. The exterior of the system uses aluminum-alloy brackets, and the walls use relatively lightweight compressed PC panels.
The six-axis robotic arm used in the system has six spatial degrees of freedom and high mobility, and can position precisely in complex curved spaces. The print nozzle is mounted at the end of the arm, which controls three-dimensional patterned printing of the nozzle on the bioprosthesis surface. Nozzle discharge is regulated by an electronic pressure-regulating valve to ensure uniform output.
2.1. Design of the four-axis linkage platform
The printing platform is a four-axis linkage platform with four degrees of freedom: linear motion in the X, Y and Z directions and rotation about the Z axis, all of which can run simultaneously. It consists of three linear modules and a high-precision turntable; through three-dimensional motion in space and rotation about the Z axis, it cooperates with the six-axis arm controlling the nozzle to adjust the printing position and realize patterned printing of complex curved surfaces.
2.2. Hardware design of multi-camera vision
The main function of the vision hardware solution is to determine the three-dimensional position of the needle, realizing measurement of the printing needle, automatic position correction, and center measurement of the needle tip and needle mark. A detection scheme for the vision measurement system was designed according to the system's functional and accuracy requirements. The multi-view camera system used in this project adopts an opposed, double-row LED light source whose brightness can be adjusted remotely as needed.
The vision system consists of two binocular subsystems arranged on the surrounding sides; binocular subsystems can be added dynamically as the situation requires. The parallel binocular measurement method is efficient, adequately accurate, structurally simple and low-cost, and is well suited to online, non-contact product inspection and quality control on the manufacturing floor. For measuring moving objects (including animals and human bodies), since image acquisition is completed in an instant, the parallel binocular system is a more effective measurement method. Because the robotic arm occludes the view during motion, two vision subsystems are required to guarantee that the probe can be detected; at the same time, multiple camera systems provide more data and determine the probe position more precisely.
When the robotic-arm detection algorithm is started, it detects the tilt direction of the arm and activates the two cameras suspended around the arm and facing the tilt direction to form a binocular vision system; which cameras face the tilt direction is determined from the posture information fed back by the end of the arm.
Through this binocular vision system, the color-segmentation pre-optimization method and the fast multi-exposure fusion method are applied to adjust image quality. The color-segmentation pre-optimization method comprises the following steps:
f) converting the input color image to the HSV color space with a color conversion function;
g) comparing color thresholds in the H, S and V channels against the predetermined threshold ranges; the nozzle colors printed in the experiments are five selected from red, purple, green, cyan and blue;
when a color is within the predetermined threshold range, it is kept as a valid value; otherwise the value is discarded;
h) smoothing the acquired image, here with a median filter, to remove single-pixel noise;
i) extracting contours from the smoothed image, i.e., drawing the bounding rectangle of each independent object, and removing targets inside irrelevant rectangles by aspect ratio and area;
j) drawing the selected image according to the optimization result, leaving only the print nozzle in the figure.
If the image of the print nozzle is dim, another optimization method is started.
The predetermined threshold ranges are as follows:

[HSV threshold table, provided as image PCTCN2020090093-appb-000002 in the original document]
3. Printer control and adjustment
The visual tracking algorithm for the 3D print nozzle tip in this project comprises a tracking and positioning module, a nozzle extraction module and a tip 3D-point detection module.
High-precision visual measurement and tracking detects, recognizes and tracks a specific moving target by combining machine vision with automation technology, and measures its three-dimensional coordinates. First, from the high-precision cameras installed around the workspace, the best two are selected autonomously based on the position of the robotic-arm end to build binocular stereo vision; second, a target-tracking model combining preset methods with correlation filtering identifies and tracks the printer nozzle tip effectively; finally, Hough line detection extracts the nozzle tip point, and its position is computed from the parallax principle.
After the vision equipment obtains the actually printed tip point, it is compared with the actual three-dimensional point of the model input to the computer. If the error at this point is within a certain threshold, the four-axis linkage platform is not started; if it exceeds the error threshold, the corresponding compensation is initiated according to the error level.
3.1 Acquisition of the tip point
Acquisition of the 3D print nozzle tip point divides mainly into positioning the initial target position, target tracking, target extraction and tip-point extraction. First the light source, computer and vision equipment are switched on; then the ambient brightness is judged from the currently acquired images: if the brightness is within the set threshold range, the fast multi-exposure fusion method is not started; if it exceeds the threshold, the input images are optimized with the fast multi-exposure fusion method to adjust brightness.
The flow of the fast multi-exposure fusion method is shown in Figure 5 and comprises the following steps:
a) converting the input image to grayscale, applying gamma correction of different degrees to adjacent frames as initial correction, and applying high- and low-pass filtering to these images;
b) taking the maximum brightness value of each pixel across these images as the local contrast weight;
c) judging the brightness of these grayscale images with a discriminant method: the brightness threshold is 30, [30, 255−30] is considered the reasonable brightness interval, pixels within this interval are set to 1 and the rest to 0, yielding the exposure weight map;
d) applying histogram equalization to the input image, then median filtering to obtain the initial color weight map, and then dilation and erosion to obtain the final color weight map;
e) multiplying the exposure weight map by the color weight map, normalizing the product, multiplying the normalized result by the local contrast weight to obtain the initial fusion weights, then filtering them with a recursive filter to obtain the final fusion weights;
f) fusing the input images according to these fusion weights, thereby optimizing the image.
3.1.1 Positioning the initial target position
Tracking and positioning the 3D print nozzle tip is the most important step of the entire visual inspection and the prerequisite for the other inspections; its main function is to separate the print nozzle from the whole field of view as an independent processing unit for detection by subsequent modules. In this step, samples from the four cameras are first trained with a CNN (Convolutional Neural Network) model to judge and recognize the target object and mark the initially selected ROI where it is located.
3.1.2 Target tracking and positioning method
The main purpose of tracking the nozzle tip is to extract the three-dimensional information of the tip point effectively. Because the nozzle colors are numerous and vary greatly, traditional tracking algorithms can hardly meet the project's needs, so we adopt a discriminative model and use a classic correlation-filtering tracking algorithm. In this project, correlation-filtering tracking has three main steps. First, in frame $I_t$, samples are taken near the current position $P_t$ and a regressor is trained; this regressor can compute the response of a small sampled window. Second, in frame $I_{t+1}$, samples are taken near the previous position $P_t$ and the response of each sample is evaluated with this regressor. Finally, the sample with the strongest response is taken as the current-frame position $P_{t+1}$.
3.1.3 Target extraction algorithm for the print nozzle
Once the target object is being tracked, the region containing it can be obtained in real time through the tracking box and the target then extracted. For nozzle-tip tracking in this project, the position of the tracking box in binocular vision is first extracted as the initial position point; then the image inside the box is preprocessed. The target region is first filtered with fast least-squares filtering with adaptive boundary limitation to smooth it. Since the nozzle-tip image has relatively high contrast within the whole image, the classic K-means algorithm is used for classification to extract the image of the largest class. After K-means classification the nozzle tip is effectively segmented, as shown in Figure 6.
3.1.4 Algorithm for obtaining the 3D tip point
To obtain the three-dimensional coordinate point: first, grayscale the acquired tip target; second, threshold the grayscale image of the target and smooth it; then perform edge detection on the smoothed image; next, perform Hough line detection and find the intersection point; finally, obtain the three-dimensional coordinates of the nozzle tip.
Because the edges are not very smooth, many lines are detected; but we found that in line detection with HoughLinesP, setting the accumulator threshold to 30 and the threshold between two lines to 10 extracts the lines on both sides of the nozzle rather accurately. In the Hough detection we limit the number of fitted lines to at most 3, and inconsistencies in the number of fitted lines occur in the following cases: all fitted lines on the same side; two on the same side; one vertical. When all fitted lines are on the same side, we take the average of the actually printed value and the lowest point of the nozzle's outer contour as the intersection. When two lines are on the same side, we intersect the fitted line closest to the outer contour with the fitted line on the other side and take that intersection. When one line is vertical, we take the non-vertical fitted line on the other side and use its intersection with the vertical line as the intersection point. These intersection points are the positions we track and locate.
As shown in Figure 7, the number of Hough lines at the nozzle tip is uncertain: usually there are 2 (Figure 7a), and we take their intersection as the coordinate point of the nozzle tip. There are exceptions with 3 or more lines (Figure 7b). In that case, we first determine the lowest point of the nozzle's outer contour and compare it with the direction of the robotic-arm end: if the arm end points upward, printing has ended and we mark the point invalid; if it still points downward, the arm is judged to be still working. The judgment is: take the slopes of the lines, and among lines on the same side take the one with the smaller absolute slope; its intersection is the required point. If the slopes all tend to infinity (Figure 7c), the intersection of the inner line with the line on the other side is taken.
After the intersection is found, the three-dimensional coordinates of the nozzle tip are obtained from the parallax principle of binocular vision.
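Once the tip pixel is found in both images of the selected pair, a rectified parallel-binocular model converts disparity into depth. A minimal sketch under that assumption (focal length in pixels, baseline and principal point are calibration values, shown here as symbolic parameters):

```python
def tip_3d(xl, yl, xr, f_px, baseline, cx, cy):
    # parallax principle for a rectified parallel stereo pair:
    # depth Z = f * B / disparity, with disparity d = xl - xr
    d = xl - xr                      # assumes nonzero disparity
    Z = f_px * baseline / d
    X = (xl - cx) * Z / f_px
    Y = (yl - cy) * Z / f_px
    return X, Y, Z
```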
3.2 Robot control
3.2.1 Control section
Printing platform control comprises the following parts:
● Platform design. A movable printing platform with four-axis high-precision stepper and servo control compensates printing position errors in real time;
● Control scheme design. Control is based on a PLC programmable controller and the visual error-compensation system for real-time error-information compensation;
● Robot and vision control by the PLC over TCP: the PLC exchanges data with, and controls, the robot and the vision system;
● Intelligent nozzle control. The print volume of the nozzle is adjusted by controlling the air pressure through an analog quantity.
The control flow is: after the four-axis linkage platform system starts, press reset and each axis automatically finds its origin and resets in sequence; after axis reset completes, press start and the system automatically waits for compensation information from the host computer's visual error-compensation system to perform position compensation; once compensation completes, it fetches compensation information again and compensates automatically, running back and forth in this way.
The electrical control system uses a PLC as main controller as required; the controlled units are two servo motors, two stepper motors and an analog pressure valve. PLC and robot communicate over TCP: when the robot reaches the designated printing position, the designated data is sent to the server over TCP, and the PLC obtains the designated air-pressure information and start signal to open or close the analog pressure valve. The vision system performs real-time position detection; the real-time position-error information is computed, collated and analyzed, then sent to the server, while the PLC fetches the data for position compensation, thereby improving printing accuracy.
The control system has two main features:
● Multi-axis linkage design. The printing platform uses a four-axis moving device for automatic reset and position compensation, and any single axis can be started in real time for position compensation.
● Multi-terminal data interaction. The PLC, the visual error system and the robot system can access the database information simultaneously through the standard TCP protocol, realizing multi-terminal data sharing.
3.2.2 Communication module design
The current system has three subsystems: the robotic-arm system, the vision system and the motion-platform system. A communication system must be built so that the three subsystems can communicate with each other, guaranteeing the real-time behavior and accuracy of the printing process.
To make system communication more orderly, controllable and extensible, a C/S architecture is used. The communication system is based mainly on .NET Remoting and TCP/IP; a server program and multiple client programs were written. Clients can send messages to the server program and the server can broadcast information to the clients; multiple clients can communicate simultaneously. The vision subsystem and the robotic-arm subsystem send the detected position coordinates and the ideal position coordinates, respectively, to the server; the server computes the coordinate difference and broadcasts it to every subsystem. The vision and robotic-arm subsystems ignore this message, while the motion-platform system receives it and adjusts the position.
The overall printing communication flow of the system is as follows:
(1) First, the four cameras capture pictures through the capture card, and the needle-tip detection algorithm extracts the needle-tip coordinate position (with time information) from each picture. These needle-tip coordinates are stored in the "collection-point database".
(2) The robot system transmits the motion coordinates of the robotic arm to the vision system through the Remoting protocol, where they are stored in the motion-trajectory data table. The needle-tip detection algorithm can also use this table.
(3) A timer periodically extracts the data from the collection-point database and calls the camera-calibration program to compute the coordinate points observed by the cameras; these are compared with the data in the motion-trajectory table to generate PLC motion-control parameters, which are transmitted to the PLC motion system through the Remoting protocol.
(4) Steps (1)-(3) repeat until printing ends.
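The actual server was written against .NET Remoting in C#; the sketch below re-expresses only the broadcast logic in Python sockets to show the data flow. The line-based "name,x,y,z" message format is an assumption, not the system's real protocol:

```python
import socket
import threading

clients, state = [], {}

def handle(conn):
    # vision and arm clients send "vision,x,y,z" / "arm,x,y,z" lines;
    # the server broadcasts the coordinate difference to every client
    for line in conn.makefile():
        name, *xyz = line.strip().split(",")
        state[name] = tuple(map(float, xyz))
        if "vision" in state and "arm" in state:
            d = [v - a for v, a in zip(state["vision"], state["arm"])]
            msg = ("delta,%.4f,%.4f,%.4f\n" % tuple(d)).encode()
            for c in clients:       # motion platform reacts, the others ignore
                c.sendall(msg)

srv = socket.socket()
srv.bind(("0.0.0.0", 9000))
srv.listen()
while True:
    conn, _ = srv.accept()
    clients.append(conn)
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```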
4. 3D printing experiments
The printing actuator is a UR3 robotic arm with a needle tube mounted at its end; during printing, material is extruded under pressure to achieve inkjet printing. The printed material sits on the four-axis linkage platform, and the position of the printing tip point is identified by multi-camera stereo vision measurement.
Depending on the 3D printing requirements, print-model preprocessing follows two different flows. For deposition (layer-stacking) 3D printing, the model data is sliced and a per-slice path planned, finally forming the complete printing path; for free-form surface-coating 3D printing, the path points on the surface are found first, then triangulated to obtain the normal vector of each path point; the posture parameters controlling the robotic arm are computed from the normal vectors, and finally the path-point positions are combined to form the control file for the free-form spraying path.
4.1 Planar deposition printing
For in-plane deposition printing, the path points planned from the print model are first transformed into UR3 world coordinates; with the nozzle posture fixed and the speed and acceleration set, the UR3 is driven with linear moves (MoveL) to print in the plane.
In actual printing tests, identical points on the same straight line are usually simplified to preserve continuity and travel speed. During printing, however, the UR3 arm jitters at higher travel speeds, so the actual path of a nominally straight move is not a line but an irregular, roughly sinusoidal curve, lowering printing accuracy. The arm controls its path by position and cannot be feedback-corrected before reaching the designated position, so the error is hard to eliminate; and if an error is found on arrival, another command must be sent to move the arm back to the designated position. Such a control flow makes printing discontinuous, severely affecting printing speed and the uniformity of the printed surface.
Therefore, while keeping printing continuous, this example overcomes the problem by detecting the nozzle tip position with binocular vision, computing the position error of the path point, and then compensating the error with the moving platform. Specifically, the compensation process is: spraying path points are set about 1 mm apart in the XY plane; when the arm end reaches a path point it signals the vision system, which measures the tip position, compares it with the preset position and feeds the compensation value back to the moving platform; the platform then moves back and forth by the compensation value (a travel of half the deviation) to offset the arm-end deviation, achieving error compensation. The compensation effect is shown in Figure 8.
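The compensation rule itself ("move half the deviation, only above a threshold") reduces to a few lines; the 0.05 mm tolerance below is a hypothetical value, since the text only speaks of "a certain threshold":

```python
def platform_compensation(ideal, measured, tol=0.05):
    # if the tip error is within tolerance the platform does not start;
    # otherwise move opposite the error by half the deviation value
    err = [m - i for m, i in zip(measured, ideal)]
    if max(abs(e) for e in err) <= tol:
        return (0.0, 0.0, 0.0)
    return tuple(-e / 2.0 for e in err)
```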
Printing experiments showed that lowering the arm's running speed, i.e., the printing speed, also greatly reduces arm jitter and thus improves printing accuracy. Another key factor affecting accuracy is the material extrusion speed, which is regulated by the air pressure and must be coordinated with the printing speed.
Samples printed directly with a zigzag path have poor planar uniformity, with convex bulges especially at the corners. To optimize planar uniformity, the arm is driven along a hybrid path; the trajectory is shown in Figure 9. The hybrid path (Figure 9a) improves edge accuracy over the zigzag path, but material accumulates at the edge corners, so control points were added to the edge path (Figure 9b); raising the travel speed of the turning path after the control points and reducing the extrusion pressure effectively mitigates material accumulation at the inflection points.
4.2 Free-form surface printing
Controlling the position and posture of the robotic-arm end is the core key technology of 3D printing. For an arbitrary free-form surface, the point cloud of the surface is first acquired with a line-laser scanner and fitted, according to the required print-point spacing, into the path points that control arm printing; the points are then triangulated, the normal vector at each triangle vertex is computed, the arm posture control parameters are derived from the normal vectors, and the arm control vectors are generated together with the position parameters.
The surface model of Figure 10a illustrates the data processing for free-form spraying. The point cloud scanned with the line-laser sensor is shown in Figure 10b; the scanned cloud has steps along the surface height direction, which strongly affect spraying accuracy, so the surface point cloud must be fitted according to the error.
Considering the print spacing and accuracy, the sensor's X-axis point-cloud scan interval is set to 0.3 mm and the Y-axis interval to 1 mm. To simplify the reconstruction algorithm, two-dimensional point fitting replaces three-dimensional surface reconstruction: the X-axis cross-sections of the cloud are fitted first, then the Y-axis cross-sections, using the least-squares method. After each group of points is fitted, they are merged into a new three-dimensional model. When the model surface is complex, the fitting function may differ per point set, ensuring higher accuracy of the final surface model. Taking one point set as a graphical example, the second-order function fit is closest to the original model, as shown in Figure 11a: indigo * marks the original points, blue dots the second-order fit, and red o the fourth-order fit. The original and fitted point-cloud models are shown in Figure 11b; fitting eliminates the staircase-effect error.
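The cross-section fitting is ordinary least squares over each scan row; a small numpy sketch comparing the second- and fourth-order fits on a synthetic profile (the profile data is fabricated purely for illustration):

```python
import numpy as np

def fit_profile(x, z, order):
    # least-squares polynomial fit of one X-axis cross-section
    return np.polyval(np.polyfit(x, z, order), x)

x = np.arange(0.0, 30.0, 0.3)                      # 0.3 mm X-axis scan interval
z = 5 - 0.01 * (x - 15) ** 2 + np.random.normal(0, 0.05, x.size)  # demo profile
z2, z4 = fit_profile(x, z, 2), fit_profile(x, z, 4)
print("2nd-order mean residual:", np.abs(z2 - z).mean())
print("4th-order mean residual:", np.abs(z4 - z).mean())
```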
Next, the normal vector of every point in the point-cloud model must be computed as the posture control parameter for the arm when spraying that point. The surface is reconstructed by triangulation; vectors to neighboring points are computed and cross-multiplied to obtain the point's normal vector. The result is shown in Figure 12.
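Because the fitted cloud lies on the regular scan grid, per-point normals can be approximated directly from neighboring points and a cross product, which is what the triangulation boils down to on a grid; a sketch under that grid-layout assumption:

```python
import numpy as np

def grid_normals(P):
    # P: H x W x 3 array of fitted surface points on the scan grid
    n = np.zeros_like(P)
    dx = P[:, 2:, :] - P[:, :-2, :]      # central difference along X
    dy = P[2:, :, :] - P[:-2, :, :]      # central difference along Y
    cr = np.cross(dx[1:-1, :, :], dy[:, 1:-1, :])
    cr /= np.linalg.norm(cr, axis=2, keepdims=True) + 1e-12
    cr[cr[..., 2] < 0] *= -1             # orient normals toward +Z for spraying
    n[1:-1, 1:-1, :] = cr
    return n
```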
With the normal vector of each point, the UR3 posture control parameters must be computed. The procedure is: first compute the Euler angles (roll, pitch, yaw) corresponding to the spatial normal vector, then convert them into the rotation vector Rx, Ry, Rz that controls arm posture. Combined with the path point's control position x, y, z, the six-dimensional control vector (x, y, z, Rx, Ry, Rz) of the arm is obtained.
The spatial normal vector [1 2 3] illustrates the posture-parameter calculation, as shown in Figure 13. When the posture parameter is [0 0 0], the UR3 end posture is the vector [0 0 1]. Describing the Euler angles in the XYZ fixed-angle frame, the roll angle of [1 2 3] is the 0.588 rad angle between [0 2 3] and [0 0 1], with negative direction; its pitch angle is the 0.322 rad angle between [1 0 3] and [0 0 1], with positive direction; and its yaw angle is 0.
Given the Euler angles γ, β, α (roll, pitch, yaw in the XYZ fixed-angle convention, so that $R = R_z(\alpha)R_y(\beta)R_x(\gamma)$), the rotation matrix is:

$$R=\begin{bmatrix} \cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma-\sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma+\sin\alpha\sin\gamma\\ \sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma+\cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma-\cos\alpha\sin\gamma\\ -\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma \end{bmatrix}$$

From the rotation matrix $R=(r_{ij})$, the angle θ and the axis components $k_x, k_y, k_z$ are computed:

$$\theta=\arccos\frac{r_{11}+r_{22}+r_{33}-1}{2},\qquad k_x=\frac{r_{32}-r_{23}}{2\sin\theta},\quad k_y=\frac{r_{13}-r_{31}}{2\sin\theta},\quad k_z=\frac{r_{21}-r_{12}}{2\sin\theta}$$

The rotation vector is then:

$$[R_x\ R_y\ R_z]^T=[k_x\theta\ \ k_y\theta\ \ k_z\theta]^T$$

For the spatial normal vector [1 2 3], γ = 0.588, β = 0.2705, α = 0, so:

$$[R_x\ R_y\ R_z]^T=[0.5844\ \ 0.2626\ \ -0.0795]^T$$

Following this calculation, the six-dimensional control vectors of all path points of the free-form surface can be obtained, thereby realizing printing control.
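The conversion reduces to building R and extracting the axis-angle; the numpy sketch below reproduces the worked example for the normal [1 2 3]:

```python
import numpy as np

def euler_to_rotvec(roll, pitch, yaw):
    # XYZ fixed-angle convention: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    cg, sg = np.cos(roll), np.sin(roll)
    cb, sb = np.cos(pitch), np.sin(pitch)
    ca, sa = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    # axis-angle extraction: theta from the trace, axis from the skew part
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)
    k = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return k / (2.0 * np.sin(theta)) * theta

# gamma = 0.588, beta = 0.2705, alpha = 0 for the normal [1 2 3]:
print(euler_to_rotvec(0.588, 0.2705, 0.0))   # -> [ 0.5844  0.2626 -0.0795]
```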
4.3 Printing experiments
UR3 supports offline remote programming control through its own API library; control platforms include C#, Python, etc. To facilitate testing of the path-planning algorithms, UR3 control software was developed in C# on the VS platform. Its main functions include: XYZ translation control of the UR3, single-axis rotation control, current-position display, zigzag, spiral and ring path printing control, and reading path files for printing.
With a planar square stacking path, samples printed along a zigzag path are shown in Figure 14. At higher printing speed the printed path is not strictly straight (Figure 14a); lowering the speed and adding visual feedback control improves accuracy (Figure 14b); with the hybrid path plus air-pressure control the printed sample is shown in Figure 14c.
Free-form surface coating was performed with the model of Figure 10 along a zigzag path; the coating effect is shown in Figure 15. The arm achieves the expected printing effect, but printing accuracy still needs improvement. In curved-surface coating the UR arm still jitters slightly at a motion speed of 1 mm/s, whereas planar printing does not. The larger the deflection angle of the arm end, the stronger the jitter, a major factor limiting printing accuracy. Another important factor is the material extrusion speed: when it is low, line straightness is good but material tends to accumulate at the needle, causing uneven coating (Figure 15a); when it is too high, the printed material is jagged rather than straight and accumulates at the edges (Figure 15b). The correspondence between arm speed and extrusion speed must be found experimentally to guarantee printing accuracy; Figure 15c shows a better printing result. In addition, optimizing the fitting accuracy by changing the fitting parameters, so that the fitted point cloud approaches the original model more closely, can further improve accuracy.
Example 2
Using the device and method of Example 1, the geometric dimensions and form-and-position tolerances of the mechanical part of sample 1 were inspected (Figure 16), with the following results:

[Table of measured geometric dimensions and form-and-position tolerances, provided as image PCTCN2020090093-appb-000009 in the original document]

The results show that the monitoring method in the feedback process of pre-optimized 3D printing according to the invention effectively improves the printed image quality and the position accuracy of the nozzle at the printing end of the robotic arm; it solves the problem that, for lack of a feedback system and image acquisition, the 3D printing process is vulnerable to interference from the external environment on the material at the nozzle tip, so that the position of the nozzle tip is detected accurately and effectively in real time, providing reliable information for feedback correction and correcting the 3D printing trajectory in real time, with significantly improved printing accuracy.
Exemplary embodiments of the invention have been described above. However, the scope of protection of the invention is not limited to these embodiments. Any modification, equivalent substitution, improvement, etc. made by those skilled in the art within the spirit and principles of the invention shall fall within the scope of protection of the invention.

Claims (10)

  1. A 3D printing device based on multi-axis linkage control and machine vision feedback, comprising: a robotic arm, a nozzle, cameras, a printing table, and a driving device and/or a transmission device, wherein the robotic arm is a multi-axis robotic arm, preferably a six-axis robotic arm;
    preferably, four or more cameras are provided; more preferably, the cameras are arranged around the robotic arm, for example on its four sides.
  2. A pre-optimization-based method for real-time tracking and positioning of a 3D print nozzle tip, comprising the following steps:
    1) adjusting and optimizing the printing environment;
    2) determining the initially selected ROI (Region of Interest) region where the target object is located;
    3) optimizing the initially selected ROI region.
  3. The method according to claim 2, wherein:
    in step 1), the printing environment is adjusted and optimized by a color-segmentation pre-optimization method and/or a fast multi-exposure fusion method;
    in step 2), samples from four cameras are trained with a CNN (Convolutional Neural Network) model to judge and recognize the target object and mark the initially selected ROI region where it is located;
    in step 3), a fast least-squares filtering method with adaptive boundary limitation performs edge-preserving smoothing on the image, thereby optimizing the initially selected ROI region;
    preferably, before step 1), at least one of the following training steps is performed:
    a) inputting three or more complex printing models;
    b) running the printer without air supply and without adding printing material;
    c) capturing video and/or images of the print nozzle with the cameras until printing ends;
    d) marking the nozzle region in the captured video and/or images as training samples;
    e) building a CNN training network on the training samples, training it, and obtaining the training results;
    f) determining the initial selection box from the training results.
  4. The method according to claim 2 or 3, wherein the tracking and positioning of the 3D print nozzle tip is monitored in real time by visual means, and the printing is corrected in real time according to the positioning feedback.
  5. The method according to any one of claims 2-4, wherein the method is implemented by using the 3D printing device based on multi-axis linkage control and machine vision feedback.
  6. The method according to any one of claims 2-5, wherein the pre-optimization-based real-time tracking and positioning method for the 3D print nozzle tip further comprises the following steps:
    i) inputting the model to be printed into the 3D printer;
    ii) starting the robotic-arm detection algorithm, detecting the tilt direction of the robotic arm, and activating the two cameras suspended around the arm and facing the tilt direction to form a binocular vision system;
    iii) adjusting image quality through this binocular vision system using the color-segmentation pre-optimization method and/or the fast multi-exposure fusion method;
    iv) training samples from the four cameras with the CNN model, judging and recognizing the target object, and marking the initially selected ROI region where it is located;
    v) with this initial ROI region set, tracking the print nozzle with a correlation-filtering target tracking algorithm;
    vi) extracting the target in real time through the tracking box, and applying fast least-squares filtering with adaptive boundary limitation to the image of this region to smooth it while preserving edges;
    vii) classifying the processed features with the K-means algorithm, obtaining the image of the class containing the nozzle, and segmenting out the target;
    viii) obtaining the outer contour of the printing tip with the Canny algorithm, then applying Hough line detection to the edge image and computing the midpoint of the intersection;
    preferably, the color-segmentation pre-optimization method comprises the following steps:
    a) converting the input color image to the HSV color space with a color conversion function;
    b) comparing color thresholds in the H, S and V channels against predetermined threshold ranges; the nozzle colors printed in the experiments may be several, for example five selected from red, purple, green, cyan and blue;
    when a color is within the predetermined threshold range, it is kept as a valid value; otherwise the value is discarded;
    c) smoothing the acquired image, here with a median filter, to remove single-pixel noise;
    d) extracting contours from the smoothed image, i.e., drawing the bounding rectangle of each independent object, and removing targets inside irrelevant rectangles by aspect ratio and area;
    e) drawing the selected image according to the optimization result, keeping only the print nozzle in the image.
  7. A fast multi-exposure fusion method, comprising: continuously acquiring images while the camera runs and continuously calculating the average image brightness; if it falls below the set brightness value, starting the fast multi-exposure fusion method to optimize the image.
  8. The method according to claim 7, wherein the fast multi-exposure fusion method comprises the following steps:
    a) converting the input image to grayscale, applying gamma correction of different degrees to adjacent frames as initial correction, and applying high- and low-pass filtering to these images;
    b) taking the maximum brightness value of each pixel across these images as the local contrast weight;
    c) judging the brightness of these grayscale images with a discriminant method: the brightness threshold is 30, [30, 255−30] is considered the reasonable brightness interval, pixels within this interval are set to 1 and the rest to 0, yielding the exposure weight map;
    d) applying histogram equalization to the input image, then median filtering to obtain the initial color weight map, and then dilation and erosion to obtain the final color weight map;
    e) multiplying the exposure weight map by the color weight map, normalizing the product, multiplying the normalized result by the local contrast weight to obtain the initial fusion weights, and then filtering the initial fusion weights with a recursive filter to obtain the final fusion weights;
    f) fusing the input images according to these fusion weights, thereby optimizing the image.
  9. A method for target tracking of a print nozzle, in particular of a moving print nozzle, comprising tracking with a correlation-filtering target tracking algorithm (KCF);
    preferably, in the correlation-filtering target tracking algorithm, HOG features are first extracted from multiple regions around the selected ROI region, and a circulant matrix is then used to solve for the ROI region selected in the next frame;
    preferably, when a new selected ROI region is obtained, the image of this region is first processed with fast least-squares filtering with adaptive boundary limitation, effectively keeping the object's edges intact while smoothing the remaining non-edge regions.
  10. The method according to claim 9, wherein the boundary limitation adaptively adjusts the image-boundary region through a tolerance mechanism, further regularizing the image;
    preferably, the correlation-filtering method first proposes an effective alternative for seeking the solution of the objective function defined on a weighted L2 norm, which decomposes the objective function into each spatial dimension and uses a fast one-dimensional solver to solve the matrix; the method is then extended to the more general case, by solving objective functions defined on a weighted norm $L_r$ (0 < r < 2), or by aggregating data terms that cannot be realized in existing edge-preserving (EP) filters.
PCT/CN2020/090093 2020-05-13 2020-05-13 3D printing device and method based on multi-axis linkage control and machine vision feedback measurement WO2021226891A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2020/090093 WO2021226891A1 (zh) 2020-05-13 2020-05-13 3D printing device and method based on multi-axis linkage control and machine vision feedback measurement
PCT/CN2021/093520 WO2021228181A1 (zh) 2020-05-13 2021-05-13 3D printing method and device
CN202110523612.9A CN113674299A (zh) 2020-05-13 2021-05-13 3D printing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/090093 WO2021226891A1 (zh) 2020-05-13 2020-05-13 3D printing device and method based on multi-axis linkage control and machine vision feedback measurement

Publications (1)

Publication Number Publication Date
WO2021226891A1 (zh)

Family

ID=78526164

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090093 WO2021226891A1 (zh) 2020-05-13 2020-05-13 3D printing device and method based on multi-axis linkage control and machine vision feedback measurement

Country Status (1)

Country Link
WO (1) WO2021226891A1 (zh)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106264796A (zh) * 2016-10-19 2017-01-04 泉州装备制造研究所 3D printing system based on multi-axis linkage control and machine vision measurement
CN206403893U (zh) * 2016-10-19 2017-08-15 泉州装备制造研究所 3D printing system based on multi-axis linkage control and machine vision measurement
US20180307206A1 (en) * 2017-04-24 2018-10-25 Autodesk, Inc. Closed-loop robotic deposition of material
CN107718544A (zh) * 2017-10-29 2018-02-23 南京中高知识产权股份有限公司 3D printing device with vision function and working method thereof
CN108381916A (zh) * 2018-02-06 2018-08-10 西安交通大学 Composite 3D printing system and method for non-contact identification of defect morphology
CN208035371U (zh) * 2018-03-15 2018-11-02 杭州德迪智能科技有限公司 FDM three-dimensional printer with a manipulator
CN108638497A (zh) * 2018-04-28 2018-10-12 浙江大学 Omnidirectional detection system and method for the outer surface of a model printed by a 3D printer
CN109080144A (zh) * 2018-07-10 2018-12-25 泉州装备制造研究所 Real-time tracking and positioning method for a 3D print nozzle tip based on center-point judgment
CN109177175A (zh) * 2018-07-10 2019-01-11 泉州装备制造研究所 Real-time tracking and positioning method for a 3D print nozzle tip

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114183183A (zh) * 2021-11-22 2022-03-15 中煤科工集团沈阳研究院有限公司 Device and method for constructing underground sealing walls in coal mines
CN113996757B (zh) * 2021-12-06 2022-12-13 河北工业大学 Real-time sensing and intelligent monitoring system for 3D-printed sand molds
CN113996757A (zh) * 2021-12-06 2022-02-01 河北工业大学 Real-time sensing and intelligent monitoring system for 3D-printed sand molds
CN113876453B (zh) * 2021-12-08 2022-02-22 极限人工智能有限公司 Robotic-arm-based socket preparation method and device, and surgical robot
CN113876453A (zh) * 2021-12-08 2022-01-04 极限人工智能有限公司 Robotic-arm-based socket preparation method and device, and surgical robot
CN114674391A (zh) * 2022-03-03 2022-06-28 华中科技大学 Method for measuring the initial volume of ink inkjet-printed into pixel pits
CN114606541A (zh) * 2022-03-15 2022-06-10 南通大学 Glass-microprobe-based system and method for rapid micro/nano-scale printing of two-dimensional structures
CN114606541B (zh) * 2022-03-15 2023-03-24 南通大学 Glass-microprobe-based system and method for rapid micro/nano-scale printing of two-dimensional structures
CN114603849A (zh) * 2022-04-14 2022-06-10 南京铖联激光科技有限公司 Novel scraper device for additive manufacturing and powder-spreading method
CN114603849B (zh) * 2022-04-14 2024-01-26 南京铖联激光科技有限公司 Novel scraper device for additive manufacturing and powder-spreading method
CN115107270A (zh) * 2022-05-25 2022-09-27 上海理工大学 Colored-boundary droplet filling method and device for eliminating the staircase effect in color 3D printing
CN115098961B (zh) * 2022-06-16 2023-11-07 燕山大学 Degassing U-shaped flow-channel optimization method based on the flow-throwing principle
CN115098961A (zh) * 2022-06-16 2022-09-23 燕山大学 Degassing U-shaped flow-channel optimization method based on the flow-throwing principle
CN115254537A (zh) * 2022-08-18 2022-11-01 浙江工业大学 Trajectory correction method for a glue-spraying robot
CN115254537B (zh) * 2022-08-18 2024-03-19 浙江工业大学 Trajectory correction method for a glue-spraying robot
DE102022004677B3 (de) 2022-12-07 2024-02-01 Telegärtner Karl Gärtner GmbH Printed circuit board connector (Leiterplatten-Verbinder)
CN117021574A (zh) * 2023-10-08 2023-11-10 哈尔滨理工大学 Magnetically guided controllable long-arc-path printing system and method for composite materials
CN117021574B (zh) * 2023-10-08 2024-01-09 哈尔滨理工大学 Magnetically guided controllable long-arc-path printing system and method for composite materials


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20935643

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20935643

Country of ref document: EP

Kind code of ref document: A1