CN109895086A - Machine-vision-based elevator door panel grabbing device and method - Google Patents

Machine-vision-based elevator door panel grabbing device and method Download PDF

Info

Publication number
CN109895086A
Authority
CN
China
Prior art keywords
image
door
elevator
robot
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711299737.8A
Other languages
Chinese (zh)
Inventor
覃争鸣
杨旭
李康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Original Assignee
Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Priority to CN201711299737.8A
Publication of CN109895086A
Legal status: Pending

Landscapes

  • Manipulator (AREA)

Abstract

The invention discloses a machine-vision-based elevator door panel grabbing device and method. The device consists of a vision system, an image capture card, a robot, and an industrial PC. The method is divided into three modules: S1, a vision positioning module; S2, a coordinate system conversion module; and S3, a system software design module.

Description

Machine-vision-based elevator door panel grabbing device and method
Technical field
The present invention relates to the field of robot vision positioning and grabbing, and more particularly to a machine-vision-based elevator door panel grabbing device and method.
Background technique
Robots are now widely used on industrial production lines, but most of them complete preset, fixed motions and functions through operator teaching or offline programming. If the environment around the workpiece changes, the robot task is likely to fail.
By combining machine vision with robotics, the positioning function of machine vision gives the robot its own "eyes" to obtain the position of the workpiece and guide the robot to complete grabbing, handling, and similar work.
At present, most robot grabbing and positioning systems use a PLC controller, and some rely on parameters detected by various sensors to assist the grabbing process. This makes them susceptible to external interference, requires more hardware, and increases system cost.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and provide a machine-vision-based elevator door panel grabbing device and method. For the problem that the elevator door panel is large and single-camera positioning is inaccurate and unstable, a dual-camera method is proposed to guarantee the accuracy and stability of positioning.
Another object of the present invention is to provide a positioning and grabbing method implemented with the above machine-vision-based elevator door panel grabbing device, divided into three modules: S1, a vision positioning module; S2, a coordinate system conversion module; and S3, a system software design module.
The technical solution of the present invention to the above technical problem is as follows:
A machine-vision-based elevator door panel grabbing device and method, characterized in that the device includes a vision system, an image capture card, an industrial PC, and a robot. The vision system consists of cameras, lenses, and light sources; it serves as the visual sensor of the system and captures images of the incoming elevator door panel. The image capture card is mounted in a PCI slot of the industrial PC; it acquires the images captured by the vision system and inputs them into the industrial PC. The industrial PC is the core of the system; it performs the key tasks of image processing, interface display, and communication, and feeds the processed result back to the robot. The robot completes the grabbing work; after its gripper reaches the image capture position, it issues an image acquisition signal to control the cameras to acquire images.
The positioning and grabbing method is divided into three modules: S1, a vision positioning module; S2, a coordinate system conversion module; and S3, a system software design module.
Compared with the prior art, the present invention has the following beneficial effects:
1. The dual-camera method solves the problem that single-camera positioning of the large elevator door panel is inaccurate and unstable, and guarantees the accuracy and stability of positioning;
2. The feature recognition and positioning program for the elevator door panel is designed on the basis of Cognex VisionPro visual development software. It has a good positioning effect and can guide the robot to grab the elevator door panel quickly and accurately to complete the feeding task, meeting the demands of practical industrial applications.
Detailed description of the invention
Fig. 1 is a schematic structural diagram of a specific embodiment of the machine-vision-based elevator door panel grabbing device of the present invention.
Fig. 2 is a module flow diagram of the machine-vision-based elevator door panel grabbing method of the present invention.
Fig. 3 is a schematic diagram of the vision positioning principle.
Fig. 4 is a flow diagram of the vision positioning module.
Fig. 5 is a schematic diagram of the mounting of the vision positioning system.
Specific embodiment
The present invention will now be described in further detail with reference to the embodiments and the accompanying drawings, but embodiments of the present invention are not limited thereto.
The machine-vision-based elevator door panel grabbing device of the present invention consists of a vision system, an image capture card, an industrial PC, and a robot, in which:
Referring to Fig. 1, the vision system includes cameras 1 and 2, lenses 3 and 4, and light sources 5 and 6. The two cameras 1 and 2 with lenses 3 and 4 are mounted at one end of the suction-cup gripper of robot 9; light source 5 is mounted below camera 1 and lens 3, and light source 6 is mounted below camera 2 and lens 4. Camera 1 with lens 3 and camera 2 with lens 4 capture images within their working range, namely the incoming-state images of the elevator door panel to be processed; light sources 5 and 6 illuminate the scene and enhance image contrast. The image capture card 7 is mounted in a PCI slot of the industrial PC 8; it acquires the images captured by the vision system and inputs them into the industrial PC 8. After the gripper of robot 9 reaches the image capture position, it issues an image acquisition signal to control cameras 1 and 2 to acquire images, and the acquired images are transmitted to the industrial PC 8 over Gigabit Ethernet. After obtaining the images, the industrial PC 8 calculates the position offsets through the image processing algorithm and sends them to robot 9, guiding robot 9 to adjust its grabbing pose; if positioning fails, the door panel position must be adjusted manually and repositioned. The industrial PC 8 is connected to cameras 1 and 2 and to robot 9 over Ethernet; it is the core of the system and performs the key tasks of image processing, interface display, and communication. In the figure, 10 is the robot control cabinet, which is a part of robot 9, and 11 is the incoming-material table of the elevator door panel.
Referring to Fig. 2, the machine-vision-based elevator door panel grabbing method comprises the following steps:
S1: Vision positioning module. This module captures and positions the incoming-state image of the elevator door panel. The vision positioning principle and process are described in detail as follows:
S11: Vision positioning principle and process. Referring to Fig. 3, the normal position of the incoming elevator door panel is the position with zero offset. In actual production, the worker places the door panel on the incoming-material table with a lifting device, and the placement position varies each time. Since the elevator door panel measures 2120 × 430 mm, even a slightly large error in the calculated angle, after rotation about the coordinate origin, causes a relatively large deviation at the other end of the door panel, and the longer the panel, the larger the deviation; for instance, an angular error of only 0.1° displaces the far end of a 2120 mm panel by about 2120 × tan(0.1°) ≈ 3.7 mm. In addition, the camera field of view is limited, and noise interferes with image acquisition for different door panels, so single-camera positioning is insufficiently accurate and stable. To solve this problem, the system uses a dual-camera positioning method: the angle of the straight line fitted through two widely separated points guarantees positioning accuracy and stability. Because one end of the door panel is closer to the ground when the robot feeds it, the two cameras can only be mounted at one end of the suction-cup gripper; the two points are therefore taken as the upper-left vertices A and B of the two rectangular holes at that end of the door panel.
Referring to Fig. 1, the two cameras each acquire an image of one of the two rectangular holes at one end of the door panel. Point A in the field of view of camera 1 is extracted as the intersection of feature edges La1 and La2, and point B in the field of view of camera 2 is extracted as the intersection of feature edges Lb1 and Lb2. The origin of the robot workpiece coordinate system Or is established at feature point A0; point A is used to calculate the translation offset, and line AB is used to calculate the angle offset. An image of the door panel at the reference position is acquired first, and the coordinates of points A0(x0, y0) and B0 and the angle α0 of line A0B0 are calculated (where A0 is the origin, so x0 = 0, y0 = 0) and saved as the reference position coordinates and reference angle. Then, for each incoming panel, an image is acquired and the coordinates of point A(x, y) and the angle α of line AB are calculated, from which the offsets are obtained: ΔX = x − x0 = x, ΔY = y − y0 = y, Δα = α − α0.
Referring to Fig. 4, the vision positioning process is as follows: after the robot gripper reaches the image capture position, it issues an image acquisition signal to control the cameras to acquire images, and the acquired images are transmitted to the industrial PC over Gigabit Ethernet; after obtaining the images, the industrial PC calculates the position offsets through the image processing algorithm and sends them to the robot, guiding the robot to adjust its grabbing pose; if positioning fails, the door panel position must be adjusted manually and repositioned.
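The patent does not specify how the industrial PC and the robot exchange the acquisition signal and the offsets beyond "over Ethernet"; purely as an illustration of this loop, the following is a minimal sketch assuming a plain TCP link and a comma-separated reply format (the port number and message layout are hypothetical, not from the patent):

```python
# Minimal sketch of the industrial-PC side of the loop in Fig. 4. The TCP port
# and the comma-separated message format are assumptions, not from the patent.
import socket

def serve_offsets(compute_offsets, host="0.0.0.0", port=5000):
    """Wait for the robot's acquisition signal, then reply with dX, dY, dAlpha."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                trigger = conn.recv(64)              # gripper is at the capture position
                if not trigger:
                    break
                try:
                    dx, dy, da = compute_offsets()   # acquire images + image processing
                    reply = f"OK,{dx:.3f},{dy:.3f},{da:.4f}\n"
                except RuntimeError:
                    reply = "FAIL\n"                 # positioning failed: manual re-placement
                conn.sendall(reply.encode("ascii"))
```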
S12: Vision system mounting. Based on an allowable incoming placement error of ±50 mm, the camera field of view is controlled to about 160 × 120 mm; the extracted feature points A and B are 260 mm apart, so the optical-center spacing of the two cameras is kept at 260 ± 5 mm. Constrained by practical conditions such as the maximum working height of the robot and a camera working distance of no more than 470 mm, the vision positioning system is mounted as shown in Fig. 5.
S2: Coordinate system conversion. The images acquired by the system use the image coordinate system, whereas the robot adjusts its pose in the workpiece coordinate system, so the relationship between the image coordinate system and the workpiece coordinate system must be established. The system involves three coordinate systems: the image coordinate system, the world coordinate system, and the robot workpiece coordinate system. The Z-axis coordinate is determined by a laser sensor; the vision system performs positioning in a two-dimensional plane, so all three coordinate systems are treated as two-dimensional and the Z axis is not considered.
S21: Camera calibration. Vision positioning requires determining the relationship between a point on the elevator door panel surface and its position in the image, i.e. constructing the geometric model of camera imaging; camera calibration is the process of solving the parameters of this model. Camera calibration is divided into linear and nonlinear calibration. Perspective distortion and radial distortion are generally present in the imaging process. Perspective distortion is mainly caused by the camera optical axis not being perpendicular to the door panel plane and is a linear distortion; radial distortion is mainly caused by the lens manufacturing process, includes barrel and pincushion distortion, and is a nonlinear distortion. To accurately reflect the correspondence and improve positioning accuracy, the acquired images require distortion correction, so a nonlinear calibration method is used. Nonlinear calibration establishes a nonlinear model of camera imaging and solves the intrinsic and extrinsic parameters and distortion parameters of the camera from the correspondence between the pixel coordinates and world coordinates of feature points in the calibration board images.
Nonlinear calibration uses the CogCalibCheckerBoardTool in VisionPro. Cameras 1 and 2 are calibrated in the same way, so the calibration method is illustrated with camera 1 as an example.
(1) Acquire calibration board images. Calibration boards are divided into checkerboard boards and dot-grid boards. Because a checkerboard board used with the "detailed checkerboard" search attribute gives higher calibration accuracy than a dot-grid board, a checkerboard board is used. The acquisition environment and tool configuration for the calibration board images are the same as in normal camera operation. According to the requirements of the camera field of view and accuracy, a calibration board with a plane size of 270 × 190 mm and a grid size of 5 mm × 5 mm was made.
(2) Extract calibration information. After the calibration board images are obtained, the feature points of the images are extracted with the CogCalibCheckerBoardTool; the calibration mode is set to nonlinear, and the feature search mode is set to detailed checkerboard.
(3) Camera calibration. The parameters of the camera imaging model, including the radial distortion model parameters and the linear perspective transformation model parameters, are solved. When the system runs, the obtained model is used to correct the distorted images and to convert the image coordinate system into the world coordinate system. The RMS error is the root-mean-square error of the feature point positions; the smaller the error, the better the calibration. It is calculated as follows:
RMS = √((1/N) · Σᵢ eᵢ²)
where N is the number of feature points found; i is the index of a feature point; and e is the position error of a feature point, equal to the distance between the world coordinates of the point obtained through the camera calibration conversion and the true world coordinates of the uncorrected point. In the CogCalibCheckerBoardTool, the RMS error is divided into five grades: (0, 0.1) is excellent, (0.1, 0.5) is good, (0.5, 2) is acceptable, (2, 5) is poor, and (5, +∞) is very poor.
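The patent performs this step with the CogCalibCheckerBoardTool; as an illustration of the same idea (nonlinear checkerboard calibration followed by distortion correction), the following is a minimal OpenCV sketch. The inner-corner count and file names are placeholders, not values from the patent:

```python
# Illustrative only: nonlinear checkerboard calibration and distortion correction
# with OpenCV, standing in for the CogCalibCheckerBoardTool step described above.
import glob
import cv2
import numpy as np

pattern = (9, 7)             # inner corners per row / column (assumed)
square = 5.0                 # grid size 5 mm, as in the description

# World coordinates of the board corners in the board plane (Z = 0)
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# Solve the imaging model: intrinsics plus radial/tangential distortion
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error (pixels):", rms)   # smaller is better, as above

# At run time, correct the distorted image before feature extraction
img = cv2.imread("door_panel.png")
undistorted = cv2.undistort(img, K, dist)
```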
S22: Hand-eye calibration. Hand-eye calibration completes the conversion from the world coordinate system to the robot workpiece coordinate system.
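The patent only names this step without detailing it. One common way to obtain such a two-dimensional conversion is to collect a few corresponding points (world coordinates from the calibrated camera, workpiece coordinates read from the robot) and fit a rigid transform by least squares; the sketch below is such an assumed procedure, not the patent's own method:

```python
# Hypothetical sketch: least-squares 2-D rigid transform (rotation + translation)
# mapping camera world coordinates to robot workpiece coordinates.
import numpy as np

def fit_rigid_2d(world_pts, robot_pts):
    """world_pts, robot_pts: (N, 2) arrays of corresponding points, N >= 2."""
    P = np.asarray(world_pts, float)
    Q = np.asarray(robot_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                            # optimal rotation (Kabsch algorithm)
    if np.linalg.det(R) < 0:                  # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

def world_to_workpiece(R, t, pt):
    """Convert one world-coordinate point into the robot workpiece frame."""
    return R @ np.asarray(pt, float) + t
```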
S3: System software design. Vision positioning is the key link in the robot grabbing and feeding process; the core image processing algorithm must guarantee high accuracy, high efficiency, and high stability, which would otherwise require a large amount of experimentation and increase the difficulty and duration of system development.
This system performs secondary development on the basis of VisionPro using the .NET language, which guarantees the stability and efficiency of the system while shortening the application development cycle and reducing development cost.
This module includes two steps, S31, extracting the feature point targets, and S32, calculating the offsets, described in detail as follows:
S31: Extract the feature point targets. This step includes the following three processes:
(1) Convert the coordinate system. Images are acquired with the CogAcqFifoTool. At this point the images use the image coordinate system; the coordinate system is converted according to the coordinate system conversion method above, so that the offsets are all computed in the workpiece coordinate system and can be used directly by the robot.
(2) Template matching. Template matching finds the geometric-transformation coordinate correspondence between a feature image and its detection target. Variations in the incoming placement position change the feature positions in the acquired images, so template matching is used to coarsely locate the feature-edge search regions: the CogPMAlignTool in VisionPro extracts the feature position information of the image, and the CogFixtureTool establishes the coordinate correspondence between it and the feature-edge search regions, so that changes in the incoming door panel position do not affect the extraction of the feature edges, guaranteeing stable extraction of points A and B. The CogPMAlignTool offers several template matching algorithms, mainly PatQuick, PatMax, and PatFlex; because of the computing-speed requirement and because the CogPMAlignTool is used only for coarse positioning, the PatQuick algorithm is used.
The selected template must be unique within the whole image so that it can be matched uniquely and stably; in this system, the right-angle corner where two edges of a rectangular hole intersect is selected as the matching template. Matching with the CogPMAlignTool is divided into two stages: offline template training and online template matching. First, the image mask editor of the CogPMAlignTool is used to draw the feature template to be trained, which is trained offline and saved; then template matching is performed on newly acquired images to find the template and calculate the similarity; finally, the feature position variation information matched by the CogPMAlignTool is passed to the CogFixtureTool to realize the coordinate correspondence between the template and the feature-edge search regions.
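PatQuick itself is proprietary to VisionPro; as a rough stand-in for the coarse-positioning idea (and, unlike PatQuick, without rotation tolerance), normalized cross-correlation template matching could look like the following sketch, where the score threshold is an assumption:

```python
# Illustrative coarse positioning by normalized cross-correlation, standing in
# for the CogPMAlignTool/PatQuick step. The score threshold is an assumption.
import cv2

def coarse_locate(image_gray, template_gray, min_score=0.7):
    """Return (x, y) of the best match's top-left corner, or None if no match."""
    result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < min_score:
        return None                 # coarse positioning failed
    return max_loc                  # anchors the feature-edge search regions
```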
(3) Extract feature points A and B. The feature edges are fitted first, and then the intersection of the two fitted feature edges is found. A feature edge is obtained by fitting the points found on the edge; the points are found with the calipers of the CogFindLine tool in VisionPro. In this system, 40 calipers with a search length of 14 mm and a projection length of 4 mm extract 40 feature points on each feature edge; to account for external noise in the image, the 5 points with the largest deviation are ignored when fitting the edge, improving fitting accuracy. The two fitted feature edges Result.GetLine() from the CogFindLine tools are passed to the CogIntersectLineLineTool, which finds the intersection of the two edges; the CogIntersectLineLineTool has two output terminals, X and Y, which give the coordinates of the intersection.
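The same geometry can be sketched without VisionPro: fit each feature edge to its caliper points by least squares, discard the five points with the largest deviation, refit, and intersect the two lines to obtain point A or B. The sketch below is illustrative only and is not the CogFindLine / CogIntersectLineLineTool API:

```python
# Illustrative line fitting with outlier rejection, plus line-line intersection.
import numpy as np

def fit_edge(points, drop_worst=5):
    """points: (N, 2) caliper hits on one feature edge.
    Returns (centroid, unit direction) of the fitted line."""
    pts = np.asarray(points, float)

    def pca_line(p):
        c = p.mean(axis=0)
        _, _, vt = np.linalg.svd(p - c)
        return c, vt[0]                      # principal direction of the points

    c, d = pca_line(pts)
    normal = np.array([-d[1], d[0]])
    resid = np.abs((pts - c) @ normal)       # perpendicular distance to the line
    keep = np.argsort(resid)[: max(len(pts) - drop_worst, 2)]
    return pca_line(pts[keep])               # refit without the worst points

def intersect(line_a, line_b):
    """Intersection of two (point, direction) lines, e.g. feature point A or B."""
    (p1, d1), (p2, d2) = line_a, line_b
    t = np.linalg.solve(np.column_stack((d1, -d2)), p2 - p1)
    return p1 + t[0] * d1
```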
S32: Calculate the offsets. As shown in Fig. 3, the door panel is placed at the normal grabbing position, the robot gripper moves to the image capture position and acquires images, and the coordinates of point A0(x0, y0) are calculated through the image processing described above and used as the reference position coordinates; the function CogMath.AnglePointPoint() calculates the angle α0 of line A0B0 as the reference angle. When the incoming placement position varies, subtracting the reference position coordinates from the coordinates of the point A(x, y) calculated at that time gives
ΔX = x − x0 = x (3)
ΔY = y − y0 = y (4)
The angle α of line AB is calculated with the function CogMath.AnglePointPoint(); subtracting the reference angle from it gives
Δα = α − α0 (5)
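As a compact summary of S32, the sketch below computes the three offsets, with math.atan2 standing in for CogMath.AnglePointPoint(); the example coordinates at the end are invented for illustration:

```python
# Minimal sketch of the offset computation in S32. A0 is the origin of the
# workpiece frame, so x0 = y0 = 0 and the reference coordinates drop out.
import math

def offsets(A, B, alpha0):
    """A, B: workpiece coordinates of the two feature points for this panel.
    alpha0: reference angle of line A0B0. Returns (dX, dY, dAlpha)."""
    ax, ay = A
    bx, by = B
    alpha = math.atan2(by - ay, bx - ax)     # angle of line AB
    return ax, ay, alpha - alpha0            # ΔX = x, ΔY = y, Δα = α − α0

# Example (hypothetical values): reference angle 0, A = (1.8, -0.6) mm, B = (261.5, 1.2) mm
dX, dY, dAlpha = offsets((1.8, -0.6), (261.5, 1.2), 0.0)
```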
The above are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to the foregoing; any other changes, modifications, substitutions, combinations, or simplifications made without departing from the spirit and principles of the present invention are equivalent substitutions and are included within the scope of protection of the present invention.

Claims (2)

1. A machine-vision-based elevator door panel grabbing device, characterized in that the device includes a vision system, an image capture card, a robot, and an industrial PC; wherein,
the vision system is used to capture images of the incoming elevator door panel and includes cameras, lenses, and light sources;
the image capture card is used to acquire the images captured by the vision system and input them into the industrial PC;
the industrial PC is used to process the images and send the calculated result to the robot, guiding the robot to adjust its grabbing pose;
the robot is used to complete the grabbing work; after its gripper reaches the image capture position, it issues an image acquisition signal to control the cameras to acquire images.
2. A positioning and grabbing method implemented with the machine-vision-based elevator door panel grabbing device of claim 1, characterized in that the method consists of three modules:
S1, vision positioning module;
S2, coordinate system conversion module;
S3, Design of System Software module.
CN201711299737.8A 2017-12-10 2017-12-10 Machine-vision-based elevator door panel grabbing device and method Pending CN109895086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711299737.8A CN109895086A (en) 2017-12-10 2017-12-10 Machine-vision-based elevator door panel grabbing device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711299737.8A CN109895086A (en) 2017-12-10 2017-12-10 Machine-vision-based elevator door panel grabbing device and method

Publications (1)

Publication Number Publication Date
CN109895086A true CN109895086A (en) 2019-06-18

Family

ID=66940912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711299737.8A Pending CN109895086A (en) 2017-12-10 2017-12-10 Machine-vision-based elevator door panel grabbing device and method

Country Status (1)

Country Link
CN (1) CN109895086A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111047623A (en) * 2019-12-30 2020-04-21 芜湖哈特机器人产业技术研究院有限公司 Efficient template positioning algorithm system for vision-aided positioning
CN111047623B (en) * 2019-12-30 2022-12-23 芜湖哈特机器人产业技术研究院有限公司 Efficient template positioning algorithm system for vision-aided positioning
CN111798524A (en) * 2020-07-14 2020-10-20 华侨大学 Calibration system and method based on inverted low-resolution camera
CN111798524B (en) * 2020-07-14 2023-07-21 华侨大学 Calibration system and method based on inverted low-resolution camera
CN113119107A (en) * 2021-03-05 2021-07-16 广东工业大学 Method for planning adjustable adsorption points and adsorption system


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20190618