CN113554713A - Hole-making visual positioning and detecting method for airplane skin mobile robot - Google Patents
Hole-making visual positioning and detecting method for airplane skin mobile robot
- Publication number: CN113554713A (application CN202110793390.2A)
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T—Image data processing or generation, in general (all entries below)
- G06T7/85—Stereo camera calibration
- G06T7/0004—Industrial image inspection
- G06T7/11—Region-based segmentation
- G06T7/60—Analysis of geometric attributes
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10004—Still image; Photographic image
- G06T2207/20024—Filtering details
- G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/20192—Edge enhancement; Edge preservation
- G06T2207/30164—Workpiece; Machine component
Abstract
The invention discloses a hole-making visual positioning and detection method for an aircraft skin mobile robot, comprising the following steps: step one, calibrating a binocular camera mounted on the end flange of the mobile robot; step two, calibrating the binocular camera against the mobile robot; step three, establishing a matching template; step four, establishing the pose relationship between the robot coordinate system and the aircraft coordinate system; step five, hole making; step six, dimple depth detection. Visual positioning with the binocular camera greatly improves the efficiency and precision of aircraft-skin hole making and in particular solves the difficult problem of making holes in thin curved-surface skins; moreover, the dimple depth can be detected automatically after hole making, satisfying the requirements for high-precision hole making and inspection of aircraft skin and improving overall working efficiency.
Description
Technical Field
The invention relates to the technical field of aircraft manufacturing, in particular to a hole-making visual positioning and detecting method for an aircraft skin mobile robot.
Background
Aerospace products are large, contain numerous parts and involve complex coordination relationships, so the requirements on part machining and assembly are very strict. Riveting is a very common joining method in aerospace assembly, and hole-making quality has a huge influence on the safety and service life of an aircraft. At present more and more holes are made automatically by robots, but accurately locating the machining points remains a crucial difficulty: the positioning technology currently adopted still needs improved precision, and in particular the errors produced by robot movement are difficult to compensate. In addition, countersink detection after hole making is currently performed mainly by hand, which gives low working efficiency and low detection precision, adds many uncertain factors to product quality, and limits the quality and stability of the product.
Disclosure of Invention
The invention aims to overcome the above defects and provides a hole-making visual positioning and detection method for an aircraft skin mobile robot.
To achieve this aim, the invention adopts the following technical scheme: a hole-making visual positioning and detection method for an aircraft skin mobile robot, comprising the following steps:
the method comprises the following steps: calibration of a binocular camera mounted on a terminal flange of a mobile robot
Shooting the standard calibration plate by the binocular camera through different angles to obtain internal reference and external reference of the binocular camera and a pose relation between the binocular camera, and completing calibration of the binocular camera;
step two: calibration of binocular camera and mobile robot
Shooting the binocular vision detection calibration plate through the calibrated binocular camera to obtain the pose of the characteristic points of the binocular vision detection calibration plate on a camera coordinate system, establishing the pose relation between the camera coordinate system and a robot flange coordinate system by using a laser tracker, obtaining the pose relation between the camera coordinate system and the robot coordinate system through the known pose relation between the robot flange coordinate system and the robot coordinate system, and completing the calibration of the binocular camera and the mobile robot;
step three: creation of matching templates
Shooting the characteristic points of the reference datum holes on the aircraft skin through the calibrated binocular camera, and establishing a matching template after image processing of pictures;
step four: establishment of pose relationship between robot coordinate system and airplane coordinate system
Shooting the characteristic points of the reference holes on the aircraft skin through the binocular camera again, and establishing a pose relation between a robot coordinate system and an aircraft coordinate system through the pose relation between a camera coordinate system and the robot coordinate system;
step five: making holes
Moving an end effector of the mobile robot to the position of a reference hole of an aircraft skin for shooting, matching the obtained feature point of the reference hole with a matching template, determining whether the feature point meets the requirement through parameter judgment, and making a hole by taking the reference hole as the reference if the feature point meets the requirement;
step six: dimple depth detection
After hole making, shooting a processing hole on an aircraft skin through a detection camera arranged on the side end of the end effector, recognizing the inner diameter and the outer diameter of the processing hole to further calculate the dimple depth, and determining whether the dimple depth meets the requirement through parameter judgment.
The constraint condition of the pose relationship between the camera coordinate system and the robot flange coordinate system is: fP = fT_c · cP, where fP denotes the coordinates of a point in the flange coordinate system, fT_c the pose of the camera coordinate system in the flange coordinate system, and cP the coordinates of the same point in the camera coordinate system.
the picture for establishing the matching template needs to select a clear and complete picture, the characteristic region is divided through picture correction, affine transformation, picture preprocessing, threshold processing and Blob analysis, and the matching template is established after Gaussian filtering, morphological analysis, contrast increase, edge detection and XLD analysis.
Matching against the matching template adopts an image-pyramid hierarchical search strategy.
The picture of the reference hole taken during hole making undergoes image correction and ROI analysis to segment the feature region; white noise in the image is filtered out by Gaussian filtering, the edge contour is strengthened by contrast enhancement, and the result is matched against the matching template.
The pose relation model of the robot flange coordinate system and the robot coordinate system is as follows:
bP_x = Trans_x - fP_y*(cos(A)*sin(B) - cos(C)*sin(A)*sin(B)) + fP_z*(sin(A)*sin(C) + cos(A)*cos(C)*sin(B)) + fP_x*cos(B)*cos(C)
bP_y = Trans_y + fP_y*(cos(A)*cos(C) + sin(A)*sin(B)*sin(C)) + fP_z*(cos(C)*sin(A) - cos(A)*sin(B)*sin(C)) + fP_x*cos(B)*sin(C)
bP_z = Trans_z - fP_x*sin(B) + fP_z*cos(A)*cos(B) + fP_y*cos(B)*sin(A).
The invention is characterized in that: visual positioning with the binocular camera greatly improves the efficiency and precision of aircraft-skin hole making and in particular solves the difficult problem of making holes in thin curved-surface skins; moreover, the dimple depth can be detected automatically after hole making, satisfying the requirements for high-precision hole making and inspection of aircraft skin and improving overall working efficiency.
Drawings
FIG. 1 is a schematic flow diagram of the present invention.
FIG. 2 is a search strategy image pyramid employed in the present invention.
Detailed Description
As shown in FIG. 1, the invention relates to a visual positioning and detection method for hole making of an aircraft skin mobile robot, which comprises the following steps:
the method comprises the following steps: calibration of a binocular camera mounted on a terminal flange of a mobile robot
Shooting the standard calibration plate by the binocular camera through different angles to obtain internal reference and external reference of the binocular camera and a pose relation between the binocular camera, and completing calibration of the binocular camera;
the standard calibration plate is a high-precision square plate with 7X7 holes, is arranged in the visual field range (about 1/3) of a binocular camera to shoot a certain number of pictures (more than 15 pictures), and executes calibration of the binocular camera through software to obtain internal parameters [ Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, Imageheight ] and external parameters [ X, Y, Z, A, B, C ] of the camera and the pose relationship between the binocular cameras, wherein the Focus: a focal length; kappa: distortion of the distortion; sx, Sy: a pixel size; cx, Cy: coordinates of the center point of the image; ImageWidth, ImageHeight: width and height of the image; x, Y, Z: a position coordinate; a, B, C: yaw angle, pitch angle, roll angle;
after the calibration of the binocular camera is completed, the focal length, the aperture and the pose of the binocular camera cannot be changed;
step two: calibration of binocular camera and mobile robot
Shooting the binocular vision detection calibration plate with the calibrated binocular camera to obtain the poses of the feature points of the plate in the camera coordinate system, and establishing the pose relationship between the camera coordinate system and the robot flange coordinate system using a laser tracker; compared with traditional calibration methods, the laser tracker further improves the precision of the relationship model. The constraint condition of the pose relationship between the camera coordinate system and the robot flange coordinate system is fP = fT_c · cP, where fP is the coordinate of a point in the flange coordinate system, fT_c the pose of the camera coordinate system in the flange coordinate system, and cP the coordinate of the same point in the camera coordinate system. Then, through the known pose relationship between the robot flange coordinate system and the robot coordinate system:
bP_x = Trans_x - fP_y*(cos(A)*sin(B) - cos(C)*sin(A)*sin(B)) + fP_z*(sin(A)*sin(C) + cos(A)*cos(C)*sin(B)) + fP_x*cos(B)*cos(C)
bP_y = Trans_y + fP_y*(cos(A)*cos(C) + sin(A)*sin(B)*sin(C)) + fP_z*(cos(C)*sin(A) - cos(A)*sin(B)*sin(C)) + fP_x*cos(B)*sin(C)
bP_z = Trans_z - fP_x*sin(B) + fP_z*cos(A)*cos(B) + fP_y*cos(B)*sin(A)
the pose relationship between the camera coordinate system and the robot coordinate system is acquired, completing the calibration of the binocular camera and the mobile robot;
the calibration of the binocular camera and the establishment of the binocular camera and robot flange relation model can be realized only by once calibration when products are put into a factory for debugging, and the calibration is not influenced by the walking and moving of the robot among different processing stations, so that compared with the traditional machining production mode, the hole-making precision is improved, and the use value of the robot is greatly improved;
step three: creation of matching templates
Shooting the feature points of a reference datum hole on the aircraft skin with the calibrated binocular camera, and establishing a matching template after image processing. A clear and complete picture is selected for the template: the feature region is segmented through image correction, affine transformation, image preprocessing, threshold processing and Blob analysis, and the matching template is established after Gaussian filtering, morphological analysis, contrast enhancement, edge detection and XLD analysis, providing the basis for subsequent template matching. As shown in FIG. 2, template matching adopts an image-pyramid hierarchical search strategy, which reduces computational complexity and improves both speed and search accuracy;
step four: establishment of pose relationship between robot coordinate system and airplane coordinate system
Shooting the feature points of the reference holes on the aircraft skin (at least 3) with the binocular camera, and calculating the value of each feature point in the robot coordinate system by combining the theoretical values with the pose relationship between the camera coordinate system and the robot coordinate system, thereby establishing the pose relationship between the robot coordinate system and the aircraft coordinate system. The related data are sent to the control system of the mobile robot over the OPC protocol; the positions of the holes to be machined are then calculated by shooting the reference holes again, guiding the mobile robot to perform precise hole making. The pose relationship model of the robot flange coordinate system and the robot coordinate system is:
bP_x = Trans_x - fP_y*(cos(A)*sin(B) - cos(C)*sin(A)*sin(B)) + fP_z*(sin(A)*sin(C) + cos(A)*cos(C)*sin(B)) + fP_x*cos(B)*cos(C)
bP_y = Trans_y + fP_y*(cos(A)*cos(C) + sin(A)*sin(B)*sin(C)) + fP_z*(cos(C)*sin(A) - cos(A)*sin(B)*sin(C)) + fP_x*cos(B)*sin(C)
bP_z = Trans_z - fP_x*sin(B) + fP_z*cos(A)*cos(B) + fP_y*cos(B)*sin(A)
where bP_x, bP_y, bP_z are the coordinates of point P in the base coordinate system; fP_x, fP_y, fP_z the coordinates of point P in the flange coordinate system; Trans_x, Trans_y, Trans_z the translations of the coordinate system along the X, Y and Z axes; and A, B, C the yaw angle, pitch angle and roll angle respectively;
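The patent does not state how the robot-to-aircraft pose relationship is solved from the three or more reference points; a standard least-squares rigid fit (Kabsch/SVD) is one plausible way to sketch that step, shown here under that assumption:

```python
import numpy as np

def fit_frame_transform(pts_robot, pts_aircraft):
    """Least-squares rigid transform (R, t) mapping reference-hole points
    measured in the robot frame onto their theoretical aircraft-frame
    values, from >= 3 non-collinear point pairs (Kabsch/SVD method;
    the patent's actual solver is unnamed)."""
    P = np.asarray(pts_robot, dtype=float)
    Q = np.asarray(pts_aircraft, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # force a proper rotation (det = +1), guarding against reflections
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t                          # aircraft_point ~= R @ robot_point + t
```

Using more than the minimum three points averages out measurement noise, which is consistent with the "number >= 3" requirement in the text.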
step five: making holes
The end effector of the mobile robot is moved to the reference-hole position on the aircraft skin and a picture is taken. The picture undergoes image correction and ROI analysis to segment the feature region; white noise in the image is filtered out by Gaussian filtering, and after the edge contour is strengthened by contrast enhancement, the obtained feature points of the reference hole are matched against the matching template. Parameter judgment then determines whether the feature points meet the requirements; if they do, drilling is performed with the reference hole as the datum. The position (X, Y, Z) of the reference hole can be located accurately through three-dimensional reconstruction of the feature points, so the method is not affected by factors such as the curvature of the aircraft skin and can guide the robot to drill accurately. The Gaussian filter function adopted is:
G(x, y) = (1 / (2πσ²)) * exp(-(x² + y²) / (2σ²))
where σ describes the spread of the distribution: the larger σ is, the more dispersed the weights, and the smaller σ is, the more concentrated they are;
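The effect of σ described above can be seen directly by building the discrete Gaussian kernel the filter convolves with (a standard normalized 2-D Gaussian; kernel size and helper name are illustrative):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Discrete 2-D Gaussian kernel exp(-(x^2 + y^2) / (2*sigma^2)),
    normalized to sum to 1. Larger sigma spreads the weight across the
    kernel (stronger smoothing); smaller sigma concentrates it."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()
```

Comparing the center weight for two sigmas shows why a larger sigma removes more white noise at the cost of softer edges.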
step six: dimple depth detection
After hole making is completed, the detection camera mounted on the side of the end effector moves to the target position and shoots the machined hole in the aircraft skin. Before use, the detection camera must be calibrated with a standard calibration plate; the picture is corrected and, at the same time, the physical distance corresponding to one pixel is obtained from the calibration:
PD_pixel_r = D_r / D_pixel
where PD_pixel_r is the actual physical distance covered by a single pixel, D_r the actual distance, and D_pixel the corresponding distance in pixels.
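The pixel-to-physical relationship above is a single ratio, sketched here as two small helpers (names are illustrative):

```python
def pixel_scale(real_distance, pixel_distance):
    """PD_pixel_r = D_r / D_pixel: the known physical distance between two
    calibration-plate features divided by the same distance in pixels."""
    return real_distance / pixel_distance

def to_physical(pixel_distance, scale):
    """Convert a distance measured in pixels into a physical distance."""
    return pixel_distance * scale
```

For example, if two plate features 10 mm apart span 200 pixels, each pixel covers 0.05 mm, and any pixel measurement on the machined hole scales by that factor.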
The calibrated detection camera shoots the machined hole and, through a sub-pixel-level image processing algorithm, identifies the inner radius R1 and outer radius R2 of the hole, from which the dimple depth H is calculated, greatly improving the detection precision. The depth calculation formula is:
H = (R2 - R1) * tan θ
where H is the dimple depth, R2 the radius of the outer circle profile, R1 the radius of the inner circle profile, and tan θ the tangent of the dimple angle.
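The depth formula is a one-liner once the two radii are measured; treating θ as the angle between the countersink cone surface and the skin plane is an interpretation, since the patent only calls tan θ "the dimple angle tangent":

```python
import math

def dimple_depth(r_outer, r_inner, theta_deg):
    """Dimple depth from the formula above, H = (R2 - R1) * tan(theta).
    theta_deg is the dimple (countersink) angle in degrees; its exact
    geometric definition is assumed, not stated in the source."""
    return (r_outer - r_inner) * math.tan(math.radians(theta_deg))
```

With a 45-degree countersink, the depth equals the radial difference between the outer and inner profiles, a convenient plausibility check on the measured radii.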
According to the invention, the hole making efficiency and precision of the aircraft skin are greatly improved through the visual positioning of the binocular camera, the hole making problem of the curved surface thin skin is particularly solved, the dimple depth can be automatically detected after hole making, the requirements of high-precision hole making and detection of the aircraft skin are met, and the overall working efficiency is improved.
The above description covers only preferred embodiments of the present invention, but the scope of the invention is not limited thereto; any equivalent substitution or modification that a person skilled in the art can readily conceive within the technical scope disclosed herein shall fall within the protection scope of the present invention.
Claims (6)
1. The method for visually positioning and detecting the hole making of the aircraft skin mobile robot is characterized by comprising the following steps of:
Step one: calibration of a binocular camera mounted on a flange at the tail end of the mobile robot
Shooting a standard calibration plate with the binocular camera through different angles to obtain the internal and external parameters of the binocular camera and the pose relationship between the two cameras, and completing the calibration of the binocular camera;
step two: calibration of binocular camera and mobile robot
Shooting the binocular vision detection calibration plate through the calibrated binocular camera to obtain the pose of the characteristic points of the binocular vision detection calibration plate on a camera coordinate system, establishing the pose relation between the camera coordinate system and a robot flange coordinate system by using a laser tracker, obtaining the pose relation between the camera coordinate system and the robot coordinate system through the known pose relation between the robot flange coordinate system and the robot coordinate system, and completing the calibration of the binocular camera and the mobile robot;
step three: creation of matching templates
Shooting the characteristic points of the reference datum holes on the aircraft skin through the calibrated binocular camera, and establishing a matching template after image processing of pictures;
step four: establishment of pose relationship between robot coordinate system and airplane coordinate system
Shooting the characteristic points of the reference holes on the aircraft skin through the binocular camera again, and establishing a pose relation between a robot coordinate system and an aircraft coordinate system through the pose relation between a camera coordinate system and the robot coordinate system;
step five: making holes
Moving an end effector of the mobile robot to the position of a reference hole of an aircraft skin for shooting, matching the obtained feature point of the reference hole with a matching template, determining whether the feature point meets the requirement through parameter judgment, and making a hole by taking the reference hole as the reference if the feature point meets the requirement;
step six: dimple depth detection
After hole making, shooting a processing hole on an aircraft skin through a detection camera arranged on the side end of the end effector, recognizing the inner diameter and the outer diameter of the processing hole to further calculate the dimple depth, and determining whether the dimple depth meets the requirement through parameter judgment.
2. The aircraft skin mobile robot hole-making visual positioning and detection method according to claim 1, characterized in that the constraint condition of the pose relationship between the camera coordinate system and the robot flange coordinate system is fP = fT_c · cP, where fP is the coordinate of a point in the flange coordinate system, fT_c the pose of the camera coordinate system in the flange coordinate system, and cP the coordinate of the same point in the camera coordinate system.
3. The aircraft skin mobile robot hole-making visual positioning and detection method according to claim 1, characterized in that a clear and complete picture is selected for establishing the matching template; the feature region is segmented through image correction, affine transformation, image preprocessing, threshold processing and Blob analysis, and the matching template is established after Gaussian filtering, morphological analysis, contrast enhancement, edge detection and XLD analysis.
4. The aircraft skin mobile robot drilling visual positioning and detection method according to claim 1, wherein the matching of the matching templates adopts an image pyramid hierarchical search strategy.
5. The visual positioning and detection method for hole making of the aircraft skin mobile robot as claimed in claim 1, wherein the picture of the reference hole taken during hole making is subjected to picture correction and ROI analysis to segment the characteristic region, white noise in the image is filtered out through Gaussian filtering, and the white noise is matched with the matching template after the edge contour is enhanced through contrast enhancement.
6. The aircraft skin mobile robot drilling visual positioning and detecting method as claimed in claim 1, wherein the pose relationship model of the robot flange coordinate system and the robot coordinate system is:
bP_x = Trans_x - fP_y*(cos(A)*sin(B) - cos(C)*sin(A)*sin(B)) + fP_z*(sin(A)*sin(C) + cos(A)*cos(C)*sin(B)) + fP_x*cos(B)*cos(C)
bP_y = Trans_y + fP_y*(cos(A)*cos(C) + sin(A)*sin(B)*sin(C)) + fP_z*(cos(C)*sin(A) - cos(A)*sin(B)*sin(C)) + fP_x*cos(B)*sin(C)
bP_z = Trans_z - fP_x*sin(B) + fP_z*cos(A)*cos(B) + fP_y*cos(B)*sin(A).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110793390.2A CN113554713A (en) | 2021-07-14 | 2021-07-14 | Hole-making visual positioning and detecting method for airplane skin mobile robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113554713A true CN113554713A (en) | 2021-10-26 |
Family
ID=78131697
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110793390.2A Pending CN113554713A (en) | 2021-07-14 | 2021-07-14 | Hole-making visual positioning and detecting method for airplane skin mobile robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113554713A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111260715A (en) * | 2020-01-20 | 2020-06-09 | 深圳市普渡科技有限公司 | Depth map processing method, small obstacle detection method and system |
CN111496289A (en) * | 2020-04-08 | 2020-08-07 | 清华大学 | Multifunctional integrated aviation assembly hole making system and use method thereof |
Non-Patent Citations (1)
Title |
---|
Yuan Peijiang; Chen Dongdong; Wang Tianmiao; Liu Yuanwei; Cao Shuangqian; Cai Ying; Tang Haiyang: "Research on hole position compensation based on a binocular vision measurement system", Aeronautical Manufacturing Technology, no. 04, 15 February 2018 (2018-02-15) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114565570A (en) * | 2022-02-18 | 2022-05-31 | 成都飞机工业(集团)有限责任公司 | Weak-rigidity skin dimple hole depth measuring method, device, equipment and medium |
CN114565570B (en) * | 2022-02-18 | 2024-03-15 | 成都飞机工业(集团)有限责任公司 | Weak-rigidity skin countersink hole depth measurement method, device, equipment and medium |
CN117282718A (en) * | 2023-11-24 | 2023-12-26 | 无锡出新环保设备有限公司 | Ultrasonic degreasing device for electroplated part before plating |
CN117282718B (en) * | 2023-11-24 | 2024-02-27 | 无锡出新环保设备有限公司 | Ultrasonic degreasing device for electroplated part before plating |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||