CN105528789B - Robot visual orientation method and device, vision calibration method and device - Google Patents


Info

Publication number
CN105528789B
CN105528789B (application CN201510900027.0A)
Authority
CN
China
Prior art keywords
module, image, speck, contour line, coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510900027.0A
Other languages
Chinese (zh)
Other versions
CN105528789A (en)
Inventor
Wang Xiaodong (王晓东)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN HENGKETONG ROBOT CO., LTD.
Original Assignee
Shenzhen Hengketong Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Hengketong Robot Co Ltd filed Critical Shenzhen Hengketong Robot Co Ltd
Priority to CN201510900027.0A priority Critical patent/CN105528789B/en
Publication of CN105528789A publication Critical patent/CN105528789A/en
Application granted granted Critical
Publication of CN105528789B publication Critical patent/CN105528789B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention proposes a robot visual positioning method and device. The method includes: acquiring a target image and pre-processing it; performing feature segmentation on the image according to preset segmentation parameters; filtering the segmented image; performing connected-domain detection on the filtered image to extract the specks that make up the characteristic mark; filtering the extracted specks; judging whether the number of filtered specks meets a preset count; if not, readjusting the segmentation parameters and detecting again; if so, identifying the speck contour line and judging whether the identified speck contour line matches a preset template contour line; and, on a match, outputting the identified characteristic mark. By automatically adjusting the segmentation parameters so that the segmented contours meet the initially set conditions, the method adapts to image detection under different illumination conditions. A vision calibration method and device are also proposed.

Description

Robot visual orientation method and device, vision calibration method and device
Technical field
The present invention relates to the field of robotics, and more particularly to a robot visual positioning method and device and a vision calibration method and device.
Background technology
In industrial robot systems, workpiece positioning methods include mechanical positioning, photoelectric sensors, magnetic induction devices, and vision positioning. Mechanical positioning and inductive positioning are low-cost, but their positioning accuracy and flexibility are poor, whereas vision positioning offers high precision and good flexibility.
Traditional visual processing methods use either template feature matching or spot scanning. Template feature matching is computationally expensive, requires building multi-template data for robot vision applications, is complicated to set up, and its templates must be constantly adjusted as new situations appear. The detection success rate of spot detection is strongly affected by ambient lighting conditions and camera parameters. Both methods require frequent resetting of image parameters and templates: whenever the identification mark changes, the template must be reset, otherwise marks are missed or cannot be identified at all, so the robustness of such systems is poor.
Invention content
In view of the above need to frequently reset templates, it is necessary to propose a simple robot visual positioning method and device that does not require templates to be re-read frequently.
A robot visual positioning method, the method including: S1: acquiring a target image and pre-processing it; S2: performing feature segmentation on the image processed in step S1 according to preset segmentation parameters; S3: filtering the image processed in step S2; S4: performing connected-domain detection on the image processed in step S3 to extract the specks that make up the characteristic mark; S5: filtering the specks; S6: judging whether the number of filtered specks meets a preset count; if not, adjusting the segmentation parameters of step S2 according to a preset rule and repeating steps S2-S6; if so, entering step S7; S7: identifying the speck contour line; S8: judging whether the identified speck contour line matches a preset template contour line; if it matches, entering step S9; S9: outputting the identified characteristic mark.
A robot visual positioning device, the device including: an acquisition module for acquiring a target image and pre-processing it; a segmentation module for performing feature segmentation on the pre-processed image according to preset segmentation parameters; a filter module for filtering the image processed by the segmentation module; a detection module for performing connected-domain detection on the image processed by the filter module to extract the specks that make up the characteristic mark; a filtering module for filtering the specks; a judgment module for judging whether the number of filtered specks meets a preset count and, if not, notifying the segmentation module to adjust the segmentation parameters according to a preset rule; an identification module for identifying the speck contour line if the number of filtered specks meets the preset count; a matching module for judging whether the identified speck contour line matches a preset template contour line; and an output module for outputting the identified characteristic mark if the identified speck contour line matches the preset template contour line.
The above method and device acquire a target image and pre-process it; perform feature segmentation on the image according to preset segmentation parameters; filter the segmented image; perform connected-domain detection on the filtered image to extract the specks that make up the characteristic mark; filter the extracted specks; judge whether the number of filtered specks meets a preset count; if not, readjust the segmentation parameters and detect again; if so, identify the speck contour line, judge whether the identified speck contour line matches a preset template contour line, and output the identified characteristic mark on a match. When the speck count does not meet the preset count, the segmentation parameters are readjusted automatically and the template parameters need not be read in again. By automatically adjusting the segmentation parameters so that the segmented contours meet the initially set conditions, the characteristic mark can be identified after a few iterations, and detection adapts to different illumination conditions: the characteristic mark can still be identified when the lighting is unstable. In addition, because the method avoids manual parameter adjustment, the vision system can run stably over the long term.
A vision calibration method, the method including: identifying the characteristic mark on a workpiece; moving the camera above the characteristic mark with the manipulator and recording the physical coordinates of the move; processing the image corresponding to the physical coordinates to identify the coordinates of the characteristic mark in the image; and determining, from the recorded physical coordinates and the corresponding image coordinates, the mapping between the image coordinates and the physical coordinates of the characteristic mark.
A vision calibration device, the device including: a mark identification module for identifying the characteristic mark on a workpiece; a coordinate recording module for moving the camera above the characteristic mark with the manipulator and recording the physical coordinates of the move; a coordinate identification module for processing the image corresponding to the physical coordinates and identifying the coordinates of the characteristic mark in the image; and a relationship determination module for determining, from the recorded physical coordinates and the corresponding image coordinates, the mapping between the image coordinates and the physical coordinates of the characteristic mark.
The above vision calibration method and device identify the characteristic mark on a workpiece, move the camera above the characteristic mark with the manipulator, record the physical coordinates of the move, process the image corresponding to the physical coordinates, identify the coordinates of the characteristic mark in the image, and determine the mapping between the image coordinates and the physical coordinates of the characteristic mark from the recorded physical coordinates and the corresponding image coordinates. The calibration is simple: it is carried out directly on the actual product, the process is fully automatic, manual intervention is avoided, and the calibration parameters are accurate and reliable.
Description of the drawings
Fig. 1 is a flowchart of the robot visual positioning method in one embodiment;
Fig. 2 is a schematic diagram of the star topology division of a connected domain in one embodiment;
Fig. 3 is a schematic diagram of characteristic mark rotation in one embodiment;
Fig. 4 is a schematic diagram of the normal angle calculation method in one embodiment;
Fig. 5 is a schematic diagram of the structure of the angle index table in one embodiment;
Figs. 6A to 6C are schematic diagrams of the characteristic mark under different illumination intensities in one embodiment;
Fig. 7 is a flowchart of the robot visual positioning method in another embodiment;
Fig. 8 is a flowchart of the robot visual positioning method in a further embodiment;
Fig. 9 is a flowchart of the method of identifying the speck contour line in one embodiment;
Fig. 10 is a flowchart of the method of judging whether contour lines match in one embodiment;
Fig. 11 is a flowchart of the vision calibration method in one embodiment;
Fig. 12 is a flowchart of the vision calibration method in another embodiment;
Fig. 13 is a schematic diagram of the difference between ideal coordinates and actual coordinates in one embodiment;
Fig. 14 is a structural block diagram of the robot visual positioning device in one embodiment;
Fig. 15 is a structural block diagram of the robot visual positioning device in another embodiment;
Fig. 16 is a structural block diagram of the robot visual positioning device in a further embodiment;
Fig. 17 is a structural block diagram of the identification module in one embodiment;
Fig. 18 is a structural block diagram of the matching module in one embodiment;
Fig. 19 is a structural block diagram of the vision calibration device in one embodiment;
Fig. 20 is a structural block diagram of the vision calibration device in another embodiment.
Specific implementation mode
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
As shown in Fig. 1, in one embodiment a robot visual positioning method is proposed, the method including:
Step S1: acquire a target image and pre-process it.
In the present embodiment, the target image is obtained by photographing the target object, and the acquired target image is pre-processed. Specifically, the target image can be pre-processed by sub-sampling and isolated point filtering. Sub-sampling extracts one pixel every few pixels as the valid pixel according to a fixed rule; for example, one pixel is kept every 3 pixels at equal horizontal and vertical intervals. Sub-sampling yields an image with a reduced data volume, which lowers the computational complexity and increases the computation speed. Isolated point filtering uses a linear filter or morphological filtering, chosen according to the image quality, to remove isolated noise points from the image.
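The sub-sampling step above can be sketched as follows. This is a minimal sketch assuming a row-major 8-bit grayscale buffer; the function and parameter names are illustrative and not from the patent.

```c
#include <stddef.h>

/* Keep one pixel every `step` pixels in both directions (the text uses
 * an interval of 3). src is w*h, row-major; dst must hold
 * ceil(w/step)*ceil(h/step) bytes. Returns the sub-sampled width. */
size_t subsample(const unsigned char *src, size_t w, size_t h,
                 size_t step, unsigned char *dst)
{
    size_t ow = (w + step - 1) / step;   /* output width */
    for (size_t y = 0, oy = 0; y < h; y += step, ++oy)
        for (size_t x = 0, ox = 0; x < w; x += step, ++ox)
            dst[oy * ow + ox] = src[y * w + x];
    return ow;
}
```

With step = 3, a 6x6 image reduces to 2x2, roughly a ninefold reduction in data volume.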
Step S2: perform feature segmentation on the image processed in step S1 according to preset segmentation parameters.
Specifically, the pre-processed target image is feature-segmented according to the preset segmentation parameters. Feature segmentation is the technique of dividing the image into several specific regions with distinctive properties and extracting the targets of interest. In this embodiment, the purpose of feature segmentation is to separate the colour of the characteristic mark from the colour of the overall background so that the characteristic mark can be extracted; for example, after feature segmentation the characteristic mark becomes a white pattern while the surrounding background becomes black, preparing for the subsequent extraction of the characteristic mark.
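A minimal sketch of such a segmentation step, under the assumption that the segmentation parameter is a grey threshold (the patent does not specify the parameter's exact form):

```c
#include <stddef.h>

/* Pixels at or above `thresh` become white (255, the mark), the rest
 * become black (0). `thresh` stands in for the adjustable segmentation
 * parameter of step S2. */
void threshold_segment(const unsigned char *src, unsigned char *dst,
                       size_t n, unsigned char thresh)
{
    for (size_t i = 0; i < n; ++i)
        dst[i] = (src[i] >= thresh) ? 255 : 0;
}
```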
Step S3: filter the image processed in step S2.
In the present embodiment, image filtering suppresses the noise of the target image while preserving as much image detail as possible. After feature segmentation there will inevitably be some noise points in the image background; filtering the segmented image removes these noise points.
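One possible morphological form of this filtering, sketched under the assumption that an isolated noise point is a white pixel with no white 8-neighbour (the patent only states that a linear or morphological filter is chosen according to image quality):

```c
/* Turn black any white pixel (255) that has no white 8-neighbour. */
void remove_isolated(const unsigned char *src, unsigned char *dst,
                     int w, int h)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            unsigned char v = src[y * w + x];
            if (v == 255) {
                int neighbours = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if ((dx || dy) && nx >= 0 && ny >= 0 &&
                            nx < w && ny < h && src[ny * w + nx] == 255)
                            ++neighbours;
                    }
                if (neighbours == 0)
                    v = 0;   /* isolated: remove as noise */
            }
            dst[y * w + x] = v;
        }
}
```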
Step S4: perform connected-domain detection on the image processed in step S3 to extract the specks that make up the characteristic mark.
Specifically, connected-domain detection extracts the parts of the image whose pixels share the same value; each extracted connected domain is called a Blob (speck). Connected-domain detection is performed on the filtered image to extract the specks that make up the characteristic mark. The Blob features are stored in the following structure array, where the array cells are defined by the LTRegion1 structure and the Blob distribution is stored in the flag map flagmap.
typedef struct RegionSurround
{
    double KAngle;   // object angle
    double Length;   // object length
    double Width;    // object width
    double Cenx;     // centre x coordinate
    double Ceny;     // centre y coordinate
} RegionSurround;

typedef struct LTRegion1
{
    int leftPOINT[maxImageSizeY];   // left endpoint of each row of the region
    int rightPOINT[maxImageSizeY];  // right endpoint of each row of the region
    int RegionNum;                  // region count; LTRegion[0].RegionNum holds the number of valid regions
    RECT Surround_Rect;             // bounding frame
    int LTRegion_ID;                // region ID; LTRegion[0].LTRegion_ID holds the total number of regions
    int Region_shape;               // region shape code
    int Regiondeleted;              // region validity code: 0 valid region, 1 invalid region
    int Angle_longAxis;             // region angle
    double Fill_rate;               // fill rate
    int *flagmap;                   // region flag map
    RegionSurround RegionRect;      // smallest enclosing box of the region
} LTRegion1;
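The connected-domain detection that fills such structures can be sketched as a 4-connected flood-fill labelling pass. This is a simplified sketch: `labels` plays the role of the flag map above, and the function names are illustrative.

```c
#include <stdlib.h>

/* Label each white region (255) of a binary image with a distinct
 * positive id; labels must hold w*h ints. Returns the region count. */
static void flood(const unsigned char *img, int *labels,
                  int w, int h, int sx, int sy, int id)
{
    /* explicit stack of (x, y) pairs to avoid deep recursion */
    int *stack = malloc(sizeof(int) * (size_t)(8 * w * h + 2));
    int top = 0;
    stack[top++] = sx; stack[top++] = sy;
    while (top > 0) {
        int y = stack[--top], x = stack[--top];
        if (x < 0 || y < 0 || x >= w || y >= h) continue;
        int i = y * w + x;
        if (img[i] != 255 || labels[i] != 0) continue;
        labels[i] = id;
        stack[top++] = x + 1; stack[top++] = y;
        stack[top++] = x - 1; stack[top++] = y;
        stack[top++] = x; stack[top++] = y + 1;
        stack[top++] = x; stack[top++] = y - 1;
    }
    free(stack);
}

int label_regions(const unsigned char *img, int *labels, int w, int h)
{
    int next = 0;
    for (int i = 0; i < w * h; ++i) labels[i] = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (img[y * w + x] == 255 && labels[y * w + x] == 0)
                flood(img, labels, w, h, x, y, ++next);
    return next;   /* number of connected domains (specks) found */
}
```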
Step S5: filter the specks.
Specifically, some of the specks extracted by connected-domain detection may not belong to the characteristic mark and need to be filtered out. First, an initial filtering pass excludes specks by size and area. Size exclusion judges whether the width and height of the minimal rectangle enclosing the speck fall within a reasonable range. Area filtering counts how many pixels in the flag map carry the corresponding spot label.
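The size and area filtering pass can be sketched as follows. The `Speck` type and the limit values are illustrative assumptions; in the patent the bounding box would come from `RegionSurround` and the pixel count from the flag map.

```c
typedef struct {
    int width, height;   /* minimal enclosing rectangle of the speck */
    int area;            /* number of pixels carrying the spot label  */
} Speck;

/* Keep only specks whose bounding box sides and pixel area fall within
 * the given ranges; survivors are compacted to the front of the array
 * and the number kept is returned. */
int filter_specks(Speck *s, int n, int min_side, int max_side,
                  int min_area, int max_area)
{
    int kept = 0;
    for (int i = 0; i < n; ++i) {
        int size_ok = s[i].width  >= min_side && s[i].width  <= max_side &&
                      s[i].height >= min_side && s[i].height <= max_side;
        int area_ok = s[i].area >= min_area && s[i].area <= max_area;
        if (size_ok && area_ok)
            s[kept++] = s[i];
    }
    return kept;
}
```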
Step S6: judge whether the number of filtered specks meets a preset count; if so, enter step S7; if not, adjust the segmentation parameters of step S2 according to a preset rule and repeat steps S2-S6.
In the present embodiment, the preset count is a range, for example 100-120. If the number of detected specks falls within the preset range, the method proceeds to identifying the speck contour line; if not, the identified characteristic mark is inaccurate, the segmentation parameters of step S2 need to be readjusted, and the identification steps above are re-executed. The segmentation parameters are adjusted by an adjustment value set according to a certain strategy, for example according to an adjustment direction and an adjustment step size. Specifically, the adjustment value is added to the original segmentation parameters to form new segmentation parameters, the image is feature-segmented with the new parameters, and an adjustment counter is incremented by 1 on each adjustment. A threshold for the adjustment counter can be preset: if the number of adjustments does not exceed the preset threshold, the image is feature-segmented with the new parameters; if the adjustment counter exceeds the set threshold, the function goes directly to its output stage and the whole function returns the NoObject state.
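The detect-and-adjust loop of step S6 can be sketched as follows, assuming a single scalar segmentation parameter. `detect()` stands in for steps S2-S5; all names and the direction rule are illustrative.

```c
/* detect() runs segmentation + extraction at one parameter value and
 * returns the resulting speck count. */
typedef int (*detect_fn)(int param);

/* Nudge the parameter by `step` until the speck count falls within
 * [min_count, max_count] or the adjustment counter exceeds max_adjust
 * (the NoObject case). Sets *found accordingly. */
int find_parameter(detect_fn detect, int param, int step,
                   int min_count, int max_count, int max_adjust, int *found)
{
    for (int adjustments = 0; adjustments <= max_adjust; ++adjustments) {
        int count = detect(param);
        if (count >= min_count && count <= max_count) {
            *found = 1;
            return param;
        }
        /* too few specks: lower the parameter; too many: raise it */
        param += (count < min_count) ? -step : step;
    }
    *found = 0;   /* give up: caller returns the NoObject state */
    return param;
}
```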
Step S7: identify the speck contour line.
In the present embodiment, when the speck count meets the preset count, the extracted specks meet the requirements, and the contour lines of the specks, i.e. the contour lines of the extracted mark, are identified next. Specifically, the centre of the connected domain is determined first; the connected domain is divided in the star topology manner shown in Fig. 2 with the centre of the connected domain as the central point, and the edge of the divided connected domain is sampled in polar coordinates at a preset angle interval (for example 1 degree). The sampled rectangular coordinate data (i.e. the coordinates of each corresponding vector) are stored in a linear array in order of increasing counter-clockwise polar angle. The contour line features are stored in a corresponding structure.
Step S8: judge whether the identified speck contour line matches the preset template contour line; if it matches, enter step S9; if not, terminate.
In the present embodiment, after the speck contour line is identified, it is judged whether the identified speck contour line matches the preset template contour line: on a match the identified characteristic mark is output; on a mismatch the method terminates and returns the NoObject state. Specifically, the average polar radius of the detected contour line is compared with the average polar radius of the template contour line. If they do not agree, the polar centre of the connected domain is redetermined according to the comparison result, the polar data of the contour line are recomputed from the redetermined polar centre, and a difference value T is computed between the recomputed polar data and the template polar data. If T is less than a preset value, the two match; otherwise they do not.
In addition, for a rotated workpiece, the characteristic mark rotates with the workpiece, as shown in Fig. 3, where the drumhead shapes are the characteristic marks on the workpiece. To locate the angle of the characteristic mark quickly, one embodiment uses a contour normal angle vector matching method. This method can handle images in which the characteristic mark is rotated by a large angle, at the cost of only about 5% extra image processing time. Specifically, first, the contour normal angle vector of the template image is calculated, and the resulting template contour normal angle vector is stored in a linear table as a template parameter. Second, the normal angle vector ObjectAngle[] of the contour line in the target image is calculated with a global normal angle method: the contour line vectors Vector(VectorX, VectorY) are used to compute the normal angle vector, the normal angle being represented as a vector. The calculation is shown in Fig. 4; for adjacent contour line vectors such as Vec1-Vec2 and Vec2-Vec3, the normal angle is perpendicular to the difference of the adjacent contour line vectors. The normal angle vectors are stored in a circular linear table. Finally, the starting element index of the image normal angle table ObjectAngle[] is changed sequentially, and the index value index = IA is found that minimises the accumulated normal angle difference between the real image and the template image. Multiplying the index value IA by the angle scale coefficient converts it into the rotation angle of the characteristic mark; the angle index table is shown in Fig. 5.
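The cyclic index search described above can be sketched as follows. This is a simplified sketch that ignores angle wrap-around in the individual differences; names are illustrative.

```c
#include <math.h>

/* Cyclically shift the object's normal-angle table against the
 * template's and keep the shift with the smallest summed absolute
 * difference. The returned index, times the degrees-per-entry
 * coefficient, gives the rotation angle of the mark. */
int best_rotation_index(const double *templAngle,
                        const double *objAngle, int n)
{
    int bestIdx = 0;
    double bestSum = 1e300;
    for (int shift = 0; shift < n; ++shift) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += fabs(objAngle[(i + shift) % n] - templAngle[i]);
        if (sum < bestSum) { bestSum = sum; bestIdx = shift; }
    }
    return bestIdx;
}
```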
Step S9: output the identified characteristic mark.
In the present embodiment, when the identified speck contour line matches the preset template contour line, the characteristic mark has been successfully identified, and the identified characteristic mark is output.
In the present embodiment, a target image is acquired and pre-processed; the image is feature-segmented according to preset segmentation parameters; the segmented image is filtered; connected-domain detection is performed on the filtered image to extract the specks that make up the characteristic mark; the extracted specks are filtered; it is judged whether the number of filtered specks meets a preset count; if not, the segmentation parameters are readjusted and detection is repeated; if so, the speck contour line is identified and it is judged whether the identified speck contour line matches the preset template contour line, the identified characteristic mark being output on a match. When the speck count does not meet the preset count, the segmentation parameters are readjusted automatically, so the method adapts to image detection under different illumination conditions, and the characteristic mark can still be identified when the lighting is unstable. An actual test case is shown in Figs. 6A to 6C: the light source intensity used when the template was built is shown in Fig. 6A, and the light source brightness changed after a period of actual use, as shown in Figs. 6B and 6C. By automatically adjusting the segmentation parameters so that the segmented contours meet the initially set conditions, the characteristic mark can be identified after a few iterations. In addition, because the method avoids manual parameter adjustment, the vision system can run stably over the long term.
As shown in Fig. 7, in one embodiment, the following steps precede step S1:
Step S01: read in the template image.
Specifically, before the target image of the target object is photographed, the template image photographed in advance is read in. The template image serves as the reference standard image; it is the standard against which the correctness of the subsequently extracted characteristic mark is measured.
Step S02: create template parameters from the template image that was read in; the template parameters include the number of extracted specks and the template contour line.
Specifically, the template parameters are extracted from the template image. The template parameters include at least one of: the number of pixels in the template image, the number of specks making up the characteristic mark, the height, width, area, and duty ratio of the characteristic mark, the grey threshold, and the contour line of the template.
In one embodiment, the above step S1 acquires the target image and sub-samples it.
Specifically, sub-sampling is the method of extracting one pixel every few pixels as the valid pixel according to a certain strategy. Sub-sampling yields a reduced image, which lowers the computational complexity and increases the computation speed.
As shown in Fig. 8, in one embodiment, after the identified speck contour line matches the template contour line, the method further includes:
Step S90: read the target image again and extract the contour line of the target image.
In the present embodiment, because the target image was previously sub-sampled into a reduced image to increase the computation speed, the extracted contour line is the contour line of the sub-sampled image, whose precision and resolution are limited. The original target image therefore needs to be read again, and the contour line of the characteristic mark is re-extracted on the basis of the original image to improve precision and resolution. Specifically, the contour line of the characteristic mark on the original target image is extracted using the initial values of the contour line; sub-pixel methods and a marking competition method may be used as the contour extraction method.
Step S91: fit the contour line extracted from the target image according to the parameters of the speck contour line identified in step S7.
In the present embodiment, the contour line extracted from the original target image is fitted using the parameters of the contour line of the sub-sampled image extracted in step S7. Specifically, the contour line of the original target image is fitted by least squares using the geometric parameters of the contour line extracted in step S7. For example, a circular characteristic mark is fitted with a circle model built from the radius and centre coordinates, and rectangular and straight-line marks are fitted with straight lines. The positioning accuracy of the fitted image can reach 0.1 pixel.
As shown in Fig. 9, in one embodiment, the step of identifying the speck contour line includes:
Step S71: determine the centre of the connected domain.
Specifically, identifying the speck contour line first requires computing the initial value, i.e. the centre of the connected domain. If there is only one contour line, the centre is set to the geometric centre of that contour line; if there are multiple contour centres, the centre of the connected domain is set to the mean centre of the several contour lines.
Step S72: with the centre of the connected domain as the pole of the polar coordinate system, acquire polar coordinates along the edge of the connected domain at a preset angle interval.
In the present embodiment, after the centre of the connected domain is determined, the polar data are collected along the edge of the connected domain at a preset angle interval (for example 1 degree), with the centre of the connected domain as the pole. Specifically, the contour line is divided in the star topology manner shown in Fig. 2, and the polar coordinates are acquired at the edge of the contour line.
Step S73: store the collected polar data into a linear array in order of increasing counter-clockwise polar angle.
Specifically, the sampled polar data are stored in the linear arrays VectorX[] and VectorY[] in counter-clockwise polar angle order (0-360 degrees), which makes it convenient to adjust the position of the collected characteristic mark later.
As shown in Fig. 10, in one embodiment, step S8 includes:
Step S81: judge whether the difference between the average polar radius of the detected speck contour line and the average polar radius of the template contour line is less than a preset distance; if so, enter step S9; if not, enter step S82.
Specifically, the average polar radius of the speck contour line is computed from the collected polar data, and it is judged whether the difference between the average polar radius of the detected speck contour line and that of the template contour line lies within a reasonable range against a preset maximum error distance. If the difference is less than the preset distance, the current polar centre of the connected domain is suitable, no adjustment is needed, and the identified characteristic mark can be output directly. If the difference exceeds the preset distance, the currently determined polar centre of the connected domain is unsuitable and must be readjusted.
Step S82: redetermine the polar centre of the connected domain according to the comparison result.
Specifically, if the template contour line corresponds to only part of the computed contour line, the two do not match, and the polar centre of the contour line of the characteristic mark needs to be repositioned according to the comparison result.
Step S83: recompute the polar data of the contour line from the redetermined polar centre.
Specifically, with the redetermined polar centre as the centre, the polar data of the contour edge are computed again from that centre and stored into the linear arrays.
Step S84: compute the difference value between the recomputed polar data and the template polar data, and judge whether the difference value is less than a preset value; if so, they match.
Specifically, the recomputed polar data and the template polar data are differenced to obtain a difference value T. If T is less than the preset value, the collected contour line matches the template contour line and the identified characteristic mark is output; if not, the contour extraction is unsuitable and the NoObject state is returned.
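One plausible form of the difference value T of step S84, sketched under the assumption that T is the mean absolute difference between the resampled contour radii and the template radii (the patent does not give the exact formula):

```c
#include <math.h>

/* Mean absolute radius difference over n polar samples; the contour is
 * taken to match when the returned value is below a preset tolerance. */
double contour_difference(const double *objR, const double *templR, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += fabs(objR[i] - templR[i]);
    return sum / n;
}
```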
As shown in figure 11, in one embodiment a vision calibration method is provided, the method comprising:
Step 1102: identify the characteristic mark on the workpiece.
In this embodiment, the characteristic mark on the workpiece must first be identified by vision positioning. The characteristic mark labels an operation point position, for example a dispensing or drilling position. Characteristic marks come in many shapes: circular, square, cross-shaped, and so on.
Step 1104: move the camera above the characteristic mark by means of the manipulator, and record the physical coordinates of the movement.
Specifically, a manipulator is an automatic operating device that imitates certain holding functions of the human hand and arm in order to grasp and carry objects or operate tools according to a fixed program. The camera is moved above the characteristic mark by the manipulator, and the physical coordinates of the manipulator's movement during this process are recorded.
Step 1106: process the image corresponding to the physical coordinates, and identify the coordinates of the characteristic mark in the image.
Specifically, after the physical coordinates of the manipulator's movement are recorded, the image corresponding to those physical coordinates is processed, the image coordinates of the characteristic mark are identified, and the result is recorded and stored.
Step 1108: according to the recorded physical coordinates and the corresponding image coordinates, determine the mapping relationship between the image coordinates of the characteristic mark and the physical coordinates.
Specifically, the manipulator is moved m times in a horizontal plane such that the index point stays inside the camera image each time. The physical coordinates (Xw, Yw) of each movement are recorded by querying the motion axes from the host, and at the same time the image corresponding to those physical coordinates is processed to identify the coordinates (Un, Vn) of the characteristic mark in it. To obtain the mapping between the image coordinates of the characteristic mark and the physical coordinates, at least 4 groups of physical coordinates with corresponding image coordinates must be recorded. The coefficients a11, a12, a21 and a22 are calculated using formulas (3) and (4) below, which are derived from formulas (1) and (2).
Xw=a11*U+a12*V+Tx (1)
Yw=a21*U+a22*V+Ty (2)
dXw=a11*dU+a12*dV (3)
dYw=a21*dU+a22*dV (4)
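Under the affine model of formulas (1) and (2), all six coefficients, including the offsets Tx and Ty, can be recovered from the recorded coordinate groups by ordinary least squares. The sketch below illustrates this; the function name and array layout are this example's own assumptions, not the patent's code:

```python
import numpy as np

def solve_full_affine(uv, xyw):
    """Solve a11, a12, a21, a22, Tx, Ty of formulas (1)-(2) by least squares.

    uv:  (m, 2) image coordinates (U, V) of the characteristic mark;
    xyw: (m, 2) physical coordinates (Xw, Yw); m >= 3 (the text records 4 groups).
    Returns the 2x2 matrix [[a11, a12], [a21, a22]] and the offset (Tx, Ty).
    """
    uv = np.asarray(uv, dtype=float)
    m = uv.shape[0]
    # Each row [U, V, 1] so that [Xw, Yw] = [U, V, 1] @ [[a11, a21], [a12, a22], [Tx, Ty]]
    design = np.hstack([uv, np.ones((m, 1))])
    coeffs, _, _, _ = np.linalg.lstsq(design, np.asarray(xyw, dtype=float), rcond=None)
    A = coeffs[:2].T           # [[a11, a12], [a21, a22]]
    t = coeffs[2]              # (Tx, Ty)
    return A, t
```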
In theory the conversion between image coordinates and physical coordinates requires a11, a12, a21, a22, Tx and Ty. In practice, however, the control system only needs the pose of the actual workpiece relative to the taught standard sample workpiece, so a simplified calibration strategy based on relative displacement coordinates may be used. In that case only 3 camera movements are needed, and the coefficients a11, a12, a21, a22 are computed by the generalized least squares method.
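A sketch of the simplified strategy: in the differential form of formulas (3) and (4) the offsets Tx and Ty cancel, so the four coefficients can be fitted from displacement pairs alone. The ordinary least-squares solve below stands in for the generalized least squares mentioned in the text, and the function name is an assumption of this example:

```python
import numpy as np

def solve_differential_coeffs(duv, dxyw):
    """Fit a11, a12, a21, a22 from displacement pairs per formulas (3)-(4).

    duv:  (k, 2) image displacements (dU, dV) between successive camera moves;
    dxyw: (k, 2) physical displacements (dXw, dYw); k >= 2, i.e. 3 camera moves.
    Tx and Ty drop out of the differential form, leaving only the 2x2 matrix.
    """
    # Solve duv @ X = dxyw in the least-squares sense; X is the transposed matrix.
    X, _, _, _ = np.linalg.lstsq(np.asarray(duv, dtype=float),
                                 np.asarray(dxyw, dtype=float), rcond=None)
    return X.T                 # [[a11, a12], [a21, a22]]
```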
In this embodiment, the characteristic mark on the workpiece is identified, the camera is moved above the characteristic mark by the manipulator, the physical coordinates of the movement are recorded, the image corresponding to the physical coordinates is processed to identify the coordinates of the characteristic mark in the image, and the mapping relationship between the image coordinates of the characteristic mark and the physical coordinates is determined by recording at least 4 groups of physical and image coordinates. This calibration method is simple and calibrates directly on the actual product; the calibration process runs fully automatically, avoiding manual intervention, and the calibration parameters are accurate and reliable.
As shown in figure 12, in one embodiment, above-mentioned vision calibration method further includes:
Step 1110: calculate the coordinate difference between the ideal characteristic mark and the actual workpiece characteristic mark by a differential positioning algorithm.
Specifically, after calibration is complete, placing the actual workpiece produces the situation shown in figure 13, where Mark1 and Mark2 are the index points in the ideal case, while Mark1′ and Mark2′ are index points 1 and 2 in the actual working state. (Xgt, Ygt) is the coordinate position on the calibration sample of an arbitrary operating point of the mechanical arm (dispensing, drilling), and the corresponding operating point position on the actual product is (Xgr, Ygr), which can be calculated by formulas (5) and (6). Here (Xg′, Yg′) and (Xg, Yg) are the physical coordinates of the midpoint of the line joining the two marks on the actual product and on the calibration sample respectively; both can be computed from formulas (1)-(4) above, and a is the angle between the actual product and the calibration sample.
Xgr=cos(a)*(Xgt-Xg)-sin(a)*(Ygt-Yg)+(Xg′-Xg) (5)
Ygr=sin(a)*(Xgt-Xg)+cos(a)*(Ygt-Yg)+(Yg′-Yg) (6)
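Formulas (5) and (6) are a rotation of the taught operating point about the sample mark midpoint (Xg, Yg) by angle a, followed by the midpoint translation (Xg′-Xg, Yg′-Yg). A direct transcription, with hypothetical parameter names (Xg′ written as xg_p):

```python
import math

def actual_operating_point(xgt, ygt, xg, yg, xg_p, yg_p, a):
    """Evaluate formulas (5) and (6): rotate the operating point taught on the
    calibration sample about the sample mark midpoint (xg, yg) by angle a
    (radians), then translate by the midpoint shift to the actual product.
    Returns the actual operating point (Xgr, Ygr).
    """
    xgr = math.cos(a) * (xgt - xg) - math.sin(a) * (ygt - yg) + (xg_p - xg)
    ygr = math.sin(a) * (xgt - xg) + math.cos(a) * (ygt - yg) + (yg_p - yg)
    return xgr, ygr
```

With a = 0 the formulas reduce to a pure translation by the midpoint shift, which is a quick sanity check on the transcription.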
Step 1112: convert the calculated coordinate difference into a compensation correction value for the physical coordinates of the robot end effector.
Specifically, the change in the operating coordinates, i.e. the coordinate difference (dXg, dYg) with dXg=Xgr-Xgt and dYg=Ygr-Ygt, is transferred to the robot control system. The robot control system adjusts the coordinates of the ideal characteristic mark to the coordinates of the actual workpiece characteristic mark according to this difference; the calculated coordinate difference is thus converted into the compensation correction value of the end effector's physical coordinates. Specifically, the mapping relationship between the image coordinates of the characteristic mark and the correction-compensation of the end effector's physical coordinates is determined from the recorded coordinate data of the standard sample, the coordinate data of the corresponding actually identified workpiece, and the difference between them. Robot operations on arbitrarily placed workpieces, such as dispensing, drilling and screw driving, are thereby realized. On actual robot processing equipment, a pre-positioning device after workpiece feeding gives a positioning accuracy within +/- 1 mm, and the distortion of an industrial vision lens is around 0.1%; after the above vision calibration process, the positioning accuracy can be improved to +/- 0.005-0.05 mm.
As shown in figure 14, in one embodiment a robot visual positioning device is proposed, the device comprising:
Acquisition module 1402, for obtaining a target image and pre-processing the target image;
Segmentation module 1404, for performing feature-based image segmentation on the pre-processed image according to preset partitioning parameters;
Filter module 1406, for filtering the image processed by the segmentation module;
Detection module 1408, for performing connected domain detection on the image processed by the filter module and extracting the specks composing the characteristic mark;
Filtering module 1410, for filtering the specks;
Judgment module 1412, for judging whether the filtered number of specks meets a preset number, and if not, notifying the segmentation module to adjust the partitioning parameters according to a preset rule;
Identification module 1414, for identifying the speck contour line if the filtered number of specks meets the preset number;
Matching module 1416, for judging whether the identified speck contour line matches a preset template contour line;
Output module 1418, for outputting the identified characteristic mark if the identified speck contour line matches the preset template contour line.
As shown in figure 15, in one embodiment, the above device further comprises:
Reading module 1400, for reading in a template image.
Creation module 1401, for creating template parameters from the read-in template image, the template parameters including the number of extracted specks and the template contour line.
In one embodiment, the acquisition module is further configured to obtain the target image and perform sub-sampling on the target image.
As shown in figure 16, in one embodiment, the above device further comprises:
Extraction module 1420, for reading the target image again and extracting the contour line of the target image.
Fitting module 1422, for fitting the extracted contour line of the original target image according to the parameters of the speck contour line identified by the identification module.
As shown in figure 17, in one embodiment, the identification module comprises:
Center calculation module 1414a, for determining the center of the connected domain.
Coordinate acquisition module 1414b, for acquiring coordinates along the edge of the connected domain at preset angular intervals, taking the center of the connected domain as the polar pole.
Memory module 1414c, for storing the acquired coordinate data into a linear array in counter-clockwise polar angle order.
As shown in figure 18, in one embodiment, the matching module comprises:
Radius judgment module 1416a, for judging whether the difference between the average polar radius of the detected speck contour line and the average polar radius of the template contour line is less than a preset distance.
Center determining module 1416b, for redefining the polar coordinate center of the connected domain according to the comparison result if the difference between the two averages exceeds the preset distance.
Contour line index module 1416c, for redefining the polar coordinate data of the contour line according to the redefined polar coordinate center.
Computing module 1416d, for computing the difference between the redefined polar coordinate data and the template polar coordinate data to obtain a difference value, judging whether the difference value is less than a preset value, and if so notifying the output module to output the identified characteristic mark.
As shown in figure 19, in one embodiment a vision calibration device is proposed, the device comprising:
Landmark identification module 1902, for identifying the characteristic mark on a workpiece.
Coordinate record module 1904, for moving the camera above the characteristic mark by means of the manipulator and recording the physical coordinates of the movement.
Coordinate identification module 1906, for processing the image corresponding to the physical coordinates and identifying the coordinates of the characteristic mark in the image.
Relationship determination module 1908, for determining the mapping relationship between the image coordinates of the characteristic mark and the physical coordinates according to the recorded physical coordinates and corresponding image coordinates.
As shown in figure 20, in one embodiment, the above device further comprises:
Coordinate difference computing module 1910, for calculating the coordinate difference between the ideal characteristic mark and the actual workpiece characteristic mark by a differential positioning algorithm.
Adjustment module 1912, for converting the calculated coordinate difference into a compensation correction value for the physical coordinates of the robot end effector.
The above embodiments express only several implementations of the present invention; their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all belong to the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (16)

1. A robot visual positioning method, the method comprising:
S1: obtaining a target image and pre-processing the target image;
S2: performing feature-based image segmentation on the image processed in step S1 according to preset partitioning parameters;
S3: filtering the image processed in step S2;
S4: performing connected domain detection on the image processed in step S3 and extracting the specks composing the characteristic mark, a speck being an extracted connected domain;
S5: filtering the specks;
S6: judging whether the number of filtered specks meets a preset number; if so, entering step S7; if not, adjusting the partitioning parameters of step S2 according to a preset rule and repeating steps S2-S6;
S7: identifying the speck contour line;
S8: judging whether the identified speck contour line matches a preset template contour line; if it matches, entering step S9;
S9: outputting the identified characteristic mark.
2. The method according to claim 1, characterized in that before step S1 the method further comprises:
S01: reading in a template image;
S02: creating template parameters from the read-in template image, the template parameters including the number of extracted specks and the template contour line.
3. The method according to claim 1, characterized in that step S1 comprises obtaining the target image and performing sub-sampling on the target image.
4. The method according to claim 3, characterized in that after step S8 the method further comprises:
S90: reading the target image again and extracting the contour line of the target image;
S91: fitting the extracted contour line of the target image according to the parameters of the speck contour line identified in step S7.
5. The method according to claim 1, characterized in that step S7 comprises:
S71: determining the center of the connected domain;
S72: taking the center of the connected domain as the polar pole and acquiring coordinates along the edge of the connected domain at preset angular intervals;
S73: storing the acquired coordinate data into a linear array in counter-clockwise polar angle order.
6. The method according to claim 1, characterized in that step S8 comprises:
S81: judging whether the difference between the average polar radius of the detected speck contour line and the average polar radius of the template contour line is less than a preset distance; if so, entering step S9; if not, entering step S82;
S82: redefining the polar coordinate center of the connected domain according to the comparison result;
S83: redefining the polar coordinate data of the contour line according to the redefined polar coordinate center;
S84: computing the difference between the redefined polar coordinate data and the template polar coordinate data to obtain a difference value, and judging whether the difference value is less than a preset value; if so, entering step S9.
7. A vision calibration method, the method comprising:
identifying the characteristic mark on a workpiece;
moving a camera above the characteristic mark by means of a manipulator, and recording the physical coordinates of the movement;
processing the image corresponding to the physical coordinates, and identifying the coordinates of the characteristic mark in the image;
determining the mapping relationship between the image coordinates of the characteristic mark and the physical coordinates according to the recorded physical coordinates and corresponding image coordinates, at least 4 groups of the physical coordinates and image coordinates being recorded.
8. The method according to claim 7, characterized in that the method further comprises:
calculating the coordinate difference between the ideal characteristic mark and the actual workpiece characteristic mark by a differential positioning algorithm;
converting the calculated coordinate difference into a compensation correction value for the physical coordinates of the robot end effector.
9. A robot visual positioning device, characterized in that the device comprises:
Acquisition module, for obtaining a target image and pre-processing the target image;
Segmentation module, for performing feature-based image segmentation on the pre-processed image according to preset partitioning parameters;
Filter module, for filtering the image processed by the segmentation module;
Detection module, for performing connected domain detection on the image processed by the filter module and extracting the specks composing the characteristic mark, a speck being an extracted connected domain;
Filtering module, for filtering the specks;
Judgment module, for judging whether the filtered number of specks meets a preset number, and if not, notifying the segmentation module to adjust the partitioning parameters according to a preset rule;
Identification module, for identifying the speck contour line if the filtered number of specks meets the preset number;
Matching module, for judging whether the identified speck contour line matches a preset template contour line;
Output module, for outputting the identified characteristic mark if the identified speck contour line matches the preset template contour line.
10. The device according to claim 9, characterized in that the device further comprises:
Reading module, for reading in a template image;
Creation module, for creating template parameters from the read-in template image, the template parameters including the number of extracted specks and the template contour line.
11. The device according to claim 9, characterized in that the acquisition module is further configured to obtain the target image and perform sub-sampling on the target image.
12. The device according to claim 9, characterized in that the device further comprises:
Extraction module, for reading the target image again and extracting the contour line of the target image;
Fitting module, for fitting the extracted contour line of the original target image according to the parameters of the speck contour line identified by the identification module.
13. The device according to claim 9, characterized in that the identification module comprises:
Center calculation module, for determining the center of the connected domain;
Coordinate acquisition module, for acquiring coordinates along the edge of the connected domain at preset angular intervals, taking the center of the connected domain as the polar pole;
Memory module, for storing the acquired coordinate data into a linear array in counter-clockwise polar angle order.
14. The device according to claim 9, characterized in that the matching module comprises:
Radius judgment module, for judging whether the difference between the average polar radius of the detected speck contour line and the average polar radius of the template contour line is less than a preset distance;
Center determining module, for redefining the polar coordinate center of the connected domain according to the comparison result if the difference between the average polar radius of the detected speck contour line and the average polar radius of the template contour line exceeds the preset distance;
Contour line index module, for redefining the polar coordinate data of the contour line according to the redefined polar coordinate center;
Computing module, for computing the difference between the redefined polar coordinate data and the template polar coordinate data to obtain a difference value, judging whether the difference value is less than a preset value, and if so notifying the output module to output the identified characteristic mark.
15. A vision calibration device, characterized in that the device comprises:
Landmark identification module, for identifying the characteristic mark on a workpiece;
Coordinate record module, for moving a camera above the characteristic mark by means of a manipulator and recording the physical coordinates of the movement;
Coordinate identification module, for processing the image corresponding to the physical coordinates and identifying the coordinates of the characteristic mark in the image;
Relationship determination module, for determining the mapping relationship between the image coordinates of the characteristic mark and the physical coordinates according to the recorded physical coordinates and corresponding image coordinates, at least 4 groups of the physical coordinates and image coordinates being recorded.
16. The device according to claim 15, characterized in that the device further comprises:
Coordinate difference computing module, for calculating the coordinate difference between the ideal characteristic mark and the actual workpiece characteristic mark by a differential positioning algorithm;
Adjustment module, for converting the calculated coordinate difference into a compensation correction value for the physical coordinates of the robot end effector.
CN201510900027.0A 2015-12-08 2015-12-08 Robot visual orientation method and device, vision calibration method and device Active CN105528789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510900027.0A CN105528789B (en) 2015-12-08 2015-12-08 Robot visual orientation method and device, vision calibration method and device


Publications (2)

Publication Number Publication Date
CN105528789A CN105528789A (en) 2016-04-27
CN105528789B true CN105528789B (en) 2018-09-18

Family

ID=55770992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510900027.0A Active CN105528789B (en) 2015-12-08 2015-12-08 Robot visual orientation method and device, vision calibration method and device

Country Status (1)

Country Link
CN (1) CN105528789B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665350A (en) * 2016-07-29 2018-02-06 广州康昕瑞基因健康科技有限公司 Image-recognizing method and system and autofocus control method and system
CN107582001B (en) * 2017-10-20 2020-08-11 珠海格力电器股份有限公司 Dish washing machine and control method, device and system thereof
CN108460388B (en) * 2018-01-18 2022-03-29 深圳市易成自动驾驶技术有限公司 Method and device for detecting positioning mark and computer readable storage medium
CN108480826A (en) * 2018-03-29 2018-09-04 江苏新时代造船有限公司 A kind of complexity Zhong Zuli robots compression arc MAG welders
CN108453356A (en) * 2018-03-29 2018-08-28 江苏新时代造船有限公司 A kind of complexity Zhong Zuli robots compression arc MAG welding methods
CN110163921B (en) * 2019-02-15 2023-11-14 苏州巨能图像检测技术有限公司 Automatic calibration method based on lamination machine vision system
CN110008955B (en) * 2019-04-01 2020-12-15 中国计量大学 Method for testing character imprinting quality of surface of automobile brake pad
CN110378970B (en) * 2019-07-08 2023-03-10 武汉理工大学 Monocular vision deviation detection method and device for AGV
CN110773842B (en) * 2019-10-21 2022-04-15 大族激光科技产业集团股份有限公司 Welding positioning method and device
CN111091086B (en) * 2019-12-11 2023-04-25 安徽理工大学 Method for improving identification rate of single characteristic information of logistics surface by utilizing machine vision technology
CN111161232B (en) * 2019-12-24 2023-11-14 贵州航天计量测试技术研究所 Component surface positioning method based on image processing
CN110755142B (en) * 2019-12-30 2020-03-17 成都真实维度科技有限公司 Control system and method for realizing space multi-point positioning by adopting three-dimensional laser positioning
CN111390882B (en) * 2020-06-02 2020-08-18 季华实验室 Robot teaching control method, device and system and electronic equipment
CN111721507B (en) * 2020-06-30 2022-08-19 东莞市聚明电子科技有限公司 Intelligent detection method and device for keyboard backlight module based on polar coordinate identification

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6137893A (en) * 1996-10-07 2000-10-24 Cognex Corporation Machine vision calibration targets and methods of determining their location and orientation in an image
CN101630409A (en) * 2009-08-17 2010-01-20 北京航空航天大学 Hand-eye vision calibration method for robot hole boring system
CN103292695A (en) * 2013-05-10 2013-09-11 河北科技大学 Monocular stereoscopic vision measuring method
CN104019745A (en) * 2014-06-18 2014-09-03 福州大学 Method for measuring size of free plane based on monocular vision indirect calibration method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A new contour matching and slice alignment method based on internal image information; Hong Quan et al.; Journal of Image and Graphics; 20010225; Vol. 6, No. 2; Section 1 *
A fast shape matching algorithm based on contour vectorization; Kuang Yongcong et al.; Application Research of Computers; 20140415; Vol. 31, No. 4; Section 1.2 *
Research on an automatic visual positioning and recognition *** for workpieces; Wang Yan et al.; Computer Engineering and Applications; 20090311; Vol. 45, No. 8; Sections 4.2-4.4, figures 13-16 *
Working-plane positioning error and correction of a vision-guided grasping manipulator; Chen Siwei; China Master's Theses Full-text Database, Information Science and Technology; 20150515, No. 05; pages I138-854, page 41 paragraph 3 to page 42 paragraph 2, page 44 paragraph 3, page 47 paragraph 2 *

Also Published As

Publication number Publication date
CN105528789A (en) 2016-04-27

Similar Documents

Publication Publication Date Title
CN105528789B (en) Robot visual orientation method and device, vision calibration method and device
CN110555889B (en) CALTag and point cloud information-based depth camera hand-eye calibration method
CN104680519B (en) Seven-piece puzzle recognition methods based on profile and color
CN111721259B (en) Underwater robot recovery positioning method based on binocular vision
CN106824806B (en) The detection method of low module plastic gear based on machine vision
CN107133983B (en) Bundled round steel end face binocular vision system and space orientation and method of counting
CN110659636A (en) Pointer instrument reading identification method based on deep learning
CN112308916B (en) Target pose recognition method based on image target
CN107993224B (en) Object detection and positioning method based on circular marker
CN105865329A (en) Vision-based acquisition system for end surface center coordinates of bundles of round steel and acquisition method thereof
CN109815822B (en) Patrol diagram part target identification method based on generalized Hough transformation
CN112907506B (en) Water gauge color information-based variable-length water gauge water level detection method, device and storage medium
CN109447062A (en) Pointer-type gauges recognition methods based on crusing robot
CN115619787B (en) UV glue defect detection method, system, equipment and medium
CN107527368A (en) Three-dimensional attitude localization method and device based on Quick Response Code
CN109584258A (en) Meadow Boundary Recognition method and the intelligent mowing-apparatus for applying it
CN109711414A (en) Equipment indicating lamp color identification method and system based on camera image acquisition
CN107891012B (en) Pearl size and circularity sorting device based on equivalent algorithm
CN112634269A (en) Rail vehicle body detection method
CN109945842B (en) Method for detecting label missing and analyzing labeling error of end face of bundled round steel
CN103533332A (en) Image processing method for converting 2D video into 3D video
CN107767366B (en) A kind of transmission line of electricity approximating method and device
CN116563391A (en) Automatic laser structure calibration method based on machine vision
CN117152727A (en) Automatic reading method of pointer instrument for inspection robot
CN114842335B (en) Grooving target identification method and system for construction robot

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20180706

Address after: 518000 Guangdong Shenzhen Baoan District Xixiang Street Sanwei community science and technology park business building 12 floor A1201-A1202

Applicant after: SHENZHEN HENGKETONG ROBOT CO., LTD.

Address before: 518000 A1203-A1204 12, Suo business building, 7 Air Road, Baoan District Xixiang street, Shenzhen, Guangdong.

Applicant before: SHENZHEN HENGKETONG MULTIDIMENSIONAL VISION CO., LTD.

GR01 Patent grant
GR01 Patent grant