CN104123542B - A kind of devices and methods therefor of hub workpiece positioning - Google Patents

A kind of devices and methods therefor of hub workpiece positioning Download PDF

Info

Publication number
CN104123542B
CN104123542B CN201410349103.9A CN201410349103A CN 104123542 B
Authority
CN
China
Prior art keywords
hub
image
point
points
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410349103.9A
Other languages
Chinese (zh)
Other versions
CN104123542A (en)
Inventor
陈喆
殷福亮
李丹丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201410349103.9A priority Critical patent/CN104123542B/en
Publication of CN104123542A publication Critical patent/CN104123542A/en
Application granted granted Critical
Publication of CN104123542B publication Critical patent/CN104123542B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a device and method for positioning a hub workpiece. The device comprises an image acquisition module, a hub template information extraction module, a to-be-detected hub feature point extraction module, a feature point matching module and a hub positioning module. The hub template information extraction module extracts the SIFT feature points on the hub template image, the positions of the circle center and the air tap (valve), and four points on the outer-edge circumference of the hub. In view of the illumination effects and the translation, rotation and scale changes encountered during hub image matching, the invention uses the Scale Invariant Feature Transform (SIFT) feature point matching method to find spatially corresponding point pairs between the template image and the image to be detected, uses these point pairs to determine the spatial correspondence between the template image and the hub region in the image to be detected, and finally maps the known calibration points of the template image, through this correspondence, to the corresponding hub points in the image to be detected, thereby achieving the purpose of hub positioning.

Description

Device and method for positioning hub workpiece
Technical Field
The invention relates to a workpiece positioning technology, in particular to a device and a method for positioning a hub workpiece.
Background
Workpiece positioning is a common operating requirement in automotive automatic assembly lines. Applying industrial robots with computer vision to automobile assembly effectively reduces the interference of human factors, significantly improves production efficiency and product quality, and reduces production cost. When an industrial robot performs automated processing of automobile hub workpieces, computer vision techniques are used to analyze the actual workpiece images captured by the industrial camera, identify the hub in the image and calculate its geometric position information; the grasping pose and motion trajectory of the robot are then determined accordingly, and the industrial robot is controlled in real time to grasp and transport the hub.
Generally, a hub workpiece is a casting: a rough hub often carries casting lines and its surface is rough. In addition, in an actual operating environment, there are often other interfering objects around the hub, the hub may be translated and rotated, some hub workpieces are only partially captured in the image, and the background at the position where the hub is placed can be complex. Under such conditions the hub workpiece image is complex, which makes positioning the hub and its air tap difficult.
The prior art relating to the present invention will now be described as follows:
1. technical scheme of prior art I
In the document 'Research on an online automobile hub identification system' (Machinery Design & Manufacture, 2007(10): 164-), a hub recognition method is proposed whose basic steps are image acquisition, image preprocessing, feature extraction, and recognition and classification. The key step of this hub positioning and classification method is to extract five features of the hub image: whether the hub center has a hole; the hub diameter; the number of holes in the peripheral region of the hub; the area of the whole hub; and the gray level containing the most pixels within the hub region of the grayscale image.
In this prior art, the hub contour circle is fitted through image region segmentation, Roberts-operator edge detection and least-squares fitting. However, in an actual processing environment, when the background on which the hub is placed is complex, or the background color is similar to that of the hub target, the edge detection result is poor, which may cause image features to be missed or misjudged. In addition, when the hub workpiece is only partially captured, this method cannot compute the hub shape and position.
2. Technical scheme of prior art II
Le Ying, Xu Xinmin and Wu Xiaobo, in the document 'Wheel hub shape and position parameter detection method based on area-array CCD' (Bulletin of Science and Technology, 2009, 25(2):196-201), propose a high-precision shape and position parameter detection method based on area-array CCD imaging and computer image processing. The basic steps are: capture a hub image containing a calibration template; perform image edge detection through gray-level transformation and region segmentation, and correct the geometric distortion of the hub boundary with an optical distortion correction model; then apply a sub-pixel interpolation algorithm to refine the edge detection result; finally, fit the hub shape and position parameters according to the positions of the mounting holes.
The positioning accuracy of the second prior art depends on the image region segmentation and edge detection results, in particular on segmenting the shape details inside the hub region. However, in an actual processing environment, when the background on which the hub is placed is complex or the background color is similar to the hub target color, image segmentation cannot reliably bring out the detailed shapes inside the hub region, so the subsequent positioning steps cannot proceed. In addition, when the hub workpiece is only partially captured, this method cannot compute the hub shape and position.
3. Technical scheme of prior art III
Hu Chao et al., in the patent 'Automatic hub identification device and method' (China, CN103090790A [P], 2013-05-08), propose a device and method for identifying the hub center hole, the hub assembly plane, and the offset parameter from the hub assembly plane to the peripheral plane at the bottom of the hub. The method first acquires, with a non-contact range finder, the offset parameters from the hub assembly surface to the peripheral plane at the bottom of the hub, covers the various technical parameters of the hub, and creates a hub information database. Two image acquisition devices then capture images above and below the hub: the image above the hub is a top view, from which the appearance parameters of the hub are obtained, while the image below the hub, i.e. a bottom view, yields the appearance parameters of the hub center hole and the hub assembly surface, such as the size, position and shape of the assembly holes of the assembly surface.
In the third prior art, a large amount of hub image information needs to be stored in advance, and in the actual identification process front images of both the top and the bottom of the hub must be acquired, so the device and the identification procedure are complex. In addition, that invention aims to locate specific parts of the hub, and in the scenario considered here the relative position of the camera and the hub is not fixed, so the method is not suitable for this scene.
4. Technical scheme of prior art four
Huang Qian, Wu Yuan and Tang Dake, in the patent 'A detection system for identifying the hub model and a detection method thereof' (China, CN103425969A [P], 2013), propose an automatic hub model identification system comprising an upper computer connected in sequence to a CCD image sensor. The patent also provides a method implemented by this system, comprising the steps of: initialization and setup; acquiring a hub-free image of the hub model identification area; creating hub model database records; and identifying and determining the hub model. With a hub model database established in advance, the system can automatically identify the model of a hub entering the hub identification area during operation.
In the fourth prior art, a hub image library needs to be stored in advance, and when the hub workpiece is only partially captured, the method cannot compute the hub shape and position.
In conclusion, the existing hub workpiece positioning techniques have the following problems: (1) in the hub matching process, when the image is affected by illumination or undergoes translation, rotation or scale changes, the positioning deviation of the hub workpiece is large; (2) under changes of the hub viewing angle and partial occlusion, hub positioning is difficult.
The abbreviations used in the present invention are as follows:
SIFT: Scale-Invariant Feature Transform;
DoG: Difference of Gaussians;
BBF: Best-Bin-First (optimal node first) search;
RANSAC: Random Sample Consensus.
Disclosure of Invention
In order to solve the above problems of the prior art, the present invention designs a device and a method for positioning a hub workpiece that achieve the following two objectives:
(1) in the hub matching process, the deviation of hub workpiece positioning is reduced;
(2) when the hub viewing angle changes and the hub is partially occluded, the hub can still be positioned easily.
In order to achieve this purpose, the technical scheme of the invention is as follows: a device for positioning a hub workpiece comprises an image acquisition module, a hub template information extraction module, a to-be-detected hub feature point extraction module, a feature point matching module and a hub positioning module. The image acquisition module acquires a grayscale image of the hub workpiece; the hub template information extraction module extracts the SIFT feature points on the hub template image, the positions of the circle center and the air tap, and four points on the outer-edge circumference of the hub; the to-be-detected hub feature point extraction module extracts the SIFT feature point information on the hub image to be detected; the feature point matching module searches for feature point pairs matching the hub image to be detected with the hub template image and calculates the spatial mapping relation between the hub to be detected and the template hub; the hub positioning module locates the circle center of the corresponding hub, the air tap and the four points on the outer-edge circumference of the hub in the image to be detected, and calculates the hub radius in the image to be detected;
the output end of the image acquisition module is respectively connected with the hub template information extraction module and the hub characteristic point extraction module to be detected, the input end of the characteristic point matching module is respectively connected with the hub template information extraction module and the hub characteristic point extraction module to be detected, and the output end of the characteristic point matching module is connected with the hub positioning module.
A positioning method using the hub workpiece positioning device comprises the following steps:
A. off-line processing
In the off-line processing stage, a hub workpiece image is collected, the SIFT feature point information of the hub image is extracted and stored, and the positions of the circle center and the air tap are marked on the template image in advance. The stage specifically comprises the following steps:
a1, collecting the wheel hub workpiece image
The image acquisition module acquires a grayscale image of the hub workpiece. An ideal hub template image should be acquired in an environment with good illumination and low noise; the background color of the template image should be uniform, and the image should contain only the hub and no other interfering objects;
a2, extracting hub template information
The hub template information extraction module extracts the SIFT feature points on the hub template image, calibrates the circle center and air tap position of the hub template, and measures the radius of the hub template. The specific steps are as follows:
A21, analyzing the input hub workpiece template image, searching for pixel points in the hub workpiece region that satisfy the SIFT feature point characteristics, and computing and storing the SIFT feature point description information, i.e. obtaining the SIFT feature point template information of the hub according to steps A211 to A216:
a211, constructing an image pyramid T
Define the input image as f(x, y) and down-sample f(x, y) I times to obtain an image pyramid T of (I + 1) layers, where I = log2[min(M, N)] - 3 and M and N are the numbers of rows and columns of f(x, y), respectively. Down-sampling takes the average of four adjacent pixels as the down-sampled pixel value.
Define the layer-0 image of the image pyramid T as T0(x, y), i.e. the original image f(x, y); the layer-i image is defined as Ti(x, y), i.e. the original image f(x, y) down-sampled i times, i = 0, 1, 2, ..., I.
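For illustration only, the following minimal Python sketch (not part of the patent; the function and variable names are ours) builds the (I + 1)-layer pyramid of step A211 with numpy, using the 2 x 2 averaging rule described above as the down-sampling operation:

```python
import numpy as np

def build_image_pyramid(f):
    """Sketch of step A211: down-sample by averaging 2x2 neighbourhoods.

    Layer 0 is the original image; layer i is the original down-sampled i times.
    The number of down-samplings is I = log2(min(M, N)) - 3, as in the text.
    """
    M, N = f.shape
    I = int(np.log2(min(M, N))) - 3
    pyramid = [f.astype(np.float64)]
    for _ in range(I):
        prev = pyramid[-1]
        h, w = prev.shape
        h, w = h - h % 2, w - w % 2               # drop an odd row/column if present
        blocks = prev[:h, :w].reshape(h // 2, 2, w // 2, 2)
        pyramid.append(blocks.mean(axis=(1, 3)))  # 2x2 average = one down-sampling
    return pyramid                                 # (I + 1) layers T_0 ... T_I
```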
A212, constructing a Gaussian pyramid L
A Gaussian convolution kernel G(x, y, σ) is convolved with Ti(x, y), and the scale-space factor σ is varied continuously to obtain the scale space Li:
Li(x, y, σ) = G(x, y, σ) * Ti(x, y)   (1)
where the symbol * denotes the convolution operator, G(x, y, σ) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)), σ is the scale-space factor, and i = 0, 1, 2, ..., I.
The same operation is performed for (I +1) images in T, resulting in L.
A213, constructing a DoG pyramid D
Take the difference of every two adjacent images (adjacent scales) in Li to obtain the DoG space Di, i.e.
Di(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * Ti(x, y) = Li(x, y, kσ) − Li(x, y, σ)   (2)
where the symbol * denotes the convolution operator, k is the constant ratio between two adjacent scale-space levels, and i = 0, 1, 2, ..., I.
The same operation is performed on the (I +1) group of images in L to obtain D.
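A rough Python sketch of steps A212 and A213 for a single pyramid layer Ti follows; OpenCV's GaussianBlur stands in for the Gaussian convolution of equation (1), and the values sigma0 = 1.6, k = sqrt(2) and five scales per layer are illustrative assumptions, not values fixed by the patent:

```python
import cv2

def build_gaussian_and_dog(T_i, sigma0=1.6, k=2 ** 0.5, n_scales=5):
    """Sketch of steps A212-A213 for one pyramid layer T_i.

    Blurs T_i with increasing scale sigma0 * k^s (equation (1)) and takes
    differences of adjacent blurred images (equation (2)).
    """
    L = [cv2.GaussianBlur(T_i, (0, 0), sigmaX=sigma0 * (k ** s))
         for s in range(n_scales)]                      # scale space L_i
    D = [L[s + 1] - L[s] for s in range(n_scales - 1)]  # DoG space D_i
    return L, D
```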
A214, detecting the spatial local extreme point in D
Using the Taylor expansion of the DoG function about the sample point,
D(X) = D + (∂Dᵀ/∂X) X + (1/2) Xᵀ (∂²D/∂X²) X   (3)
where X = (x, y, σ)ᵀ.
Setting the derivative of D(X) to zero yields the extreme point X̂ with sub-pixel precision, i.e.
X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X)   (4)
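Equation (4) can be evaluated with finite-difference derivatives over the 3 x 3 x 3 DoG neighbourhood of a candidate extremum. The sketch below is a generic implementation of that formula, assuming the DoG stack D is the list returned by build_gaussian_and_dog above; it is not code from the patent:

```python
import numpy as np

def refine_extremum(D, x, y, s):
    """Sketch of step A214: X_hat = -(d2D/dX2)^-1 * (dD/dX), equation (4)."""
    dD = np.array([
        (D[s][y, x + 1] - D[s][y, x - 1]) / 2.0,    # dD/dx
        (D[s][y + 1, x] - D[s][y - 1, x]) / 2.0,    # dD/dy
        (D[s + 1][y, x] - D[s - 1][y, x]) / 2.0,    # dD/dsigma
    ])
    dxx = D[s][y, x + 1] - 2 * D[s][y, x] + D[s][y, x - 1]
    dyy = D[s][y + 1, x] - 2 * D[s][y, x] + D[s][y - 1, x]
    dss = D[s + 1][y, x] - 2 * D[s][y, x] + D[s - 1][y, x]
    dxy = (D[s][y + 1, x + 1] - D[s][y + 1, x - 1]
           - D[s][y - 1, x + 1] + D[s][y - 1, x - 1]) / 4.0
    dxs = (D[s + 1][y, x + 1] - D[s + 1][y, x - 1]
           - D[s - 1][y, x + 1] + D[s - 1][y, x - 1]) / 4.0
    dys = (D[s + 1][y + 1, x] - D[s + 1][y - 1, x]
           - D[s - 1][y + 1, x] + D[s - 1][y - 1, x]) / 4.0
    H = np.array([[dxx, dxy, dxs],
                  [dxy, dyy, dys],
                  [dxs, dys, dss]])                  # second-derivative matrix
    X_hat = -np.linalg.solve(H, dD)                  # equation (4)
    return X_hat                                     # offset (dx, dy, dsigma)
```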
A215, screening unstable extreme points to obtain an SIFT feature point set
First, points with low contrast are removed, i.e. extreme points X̂ whose DoG response |D(X̂)| falls below a contrast threshold are discarded; edge extreme points are then removed using the Hessian matrix.
The second derivative in the x direction of an image at a given scale in the DoG pyramid is defined as Dxx (and similarly Dyy, Dxy); the Hessian matrix is then expressed as:
H = [Dxx  Dxy; Dxy  Dyy]   (5)
The two eigenvalues of H are defined as λ1 and λ2, where λ1 ≥ λ2 and λ1/λ2 = r; λ1 and λ2 correspond to the principal curvatures of the image in the x and y directions, respectively. When r exceeds the threshold 10, the extreme point is judged to lie on an edge of the DoG surface.
Let Tr(H) denote the trace of H and Det(H) its determinant; then
Tr(H)²/Det(H) = (λ1 + λ2)²/(λ1 λ2) = (rλ2 + λ2)²/(rλ2·λ2) = (r + 1)²/r > (10 + 1)²/10   (6)
That is, computing Tr(H) and Det(H) avoids computing the eigenvalues directly, which reduces the amount of computation.
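The edge test of equation (6) then amounts to comparing Tr(H)²/Det(H) against (r + 1)²/r with r = 10. A minimal sketch (our own helper, using 2-D finite differences on one DoG image) is:

```python
def is_edge_point(D_s, x, y, r=10.0):
    """Sketch of the edge test in step A215, equation (6).

    D_s is one DoG image; returns True when the extremum lies on an edge
    (Tr(H)^2 / Det(H) exceeds (r + 1)^2 / r) and should be discarded.
    """
    Dxx = D_s[y, x + 1] - 2 * D_s[y, x] + D_s[y, x - 1]
    Dyy = D_s[y + 1, x] - 2 * D_s[y, x] + D_s[y - 1, x]
    Dxy = (D_s[y + 1, x + 1] - D_s[y + 1, x - 1]
           - D_s[y - 1, x + 1] + D_s[y - 1, x - 1]) / 4.0
    tr = Dxx + Dyy                       # Tr(H)
    det = Dxx * Dyy - Dxy * Dxy          # Det(H)
    if det <= 0:                         # curvatures of opposite sign: reject
        return True
    return tr * tr / det > (r + 1) ** 2 / r
```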
A216, calculating SIFT feature point descriptors
Taking an image region of a given size around the key point as the statistical range, divide it into several blocks; the gradient histogram of the points in each block is accumulated, and a vector representing the image information of the region is computed.
The modulus of the gradient is defined as m(x, y) and its direction as θ(x, y); then
m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )   (7)
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )
First, the image region required by the descriptor is determined: the neighborhood around the feature point is divided into 4 × 4 sub-regions, each of size 3σ, where σ is the scale-space factor. Then the gradient-direction histogram of each sub-region is accumulated: with the direction of the feature point as the reference direction, the angle of the gradient direction of each pixel in the sub-region relative to the reference direction is computed and projected onto 8 directions at intervals of π/4 over the interval 0 to 2π; the gradient magnitudes accumulated in each direction are counted, and after normalization an 8-dimensional vector descriptor is generated. Finally, the 8-dimensional vectors of all sub-regions are concatenated to form a feature point descriptor of dimension 4 × 4 × 8 = 128.
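A compact sketch of equation (7) and of one sub-region's 8-bin histogram (step A216) is given below; the helper names are ours and the loop-based binning is illustrative, not an optimized descriptor implementation:

```python
import numpy as np

def gradient_mag_ori(Limg, x, y):
    """Equation (7): gradient modulus m(x, y) and direction theta(x, y) on L."""
    dx = Limg[y, x + 1] - Limg[y, x - 1]
    dy = Limg[y + 1, x] - Limg[y - 1, x]
    m = np.sqrt(dx * dx + dy * dy)
    theta = np.arctan2(dy, dx) % (2 * np.pi)        # mapped into [0, 2*pi)
    return m, theta

def subregion_histogram(mags, thetas, ref_dir):
    """8-bin gradient histogram of one 4x4 sub-region (step A216 sketch).

    Angles are taken relative to the keypoint reference direction ref_dir
    and projected onto 8 bins of width pi/4; 16 sub-regions x 8 bins give
    the 128-dimensional descriptor.
    """
    hist = np.zeros(8)
    for m, th in zip(mags, thetas):
        rel = (th - ref_dir) % (2 * np.pi)
        hist[int(rel // (np.pi / 4)) % 8] += m
    return hist
```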
A22, the hub template information extraction module calibrates and stores, in pixels, the position information of six pixel points in the hub template image: the hub workpiece center O(x0, y0), the hub workpiece air tap center Ogas(xgas, ygas), and four points on the hub outer-edge circumference O1(x1, y1), O2(x2, y2), O3(x3, y3), O4(x4, y4).
B. On-line processing
In the online processing stage, the SIFT feature points of the image to be detected are extracted; feature points matching the hub template are then searched with the Best-Bin-First (BBF) algorithm; mismatched points are eliminated with the RANSAC algorithm, and the spatial mapping relation between the hub in the image to be detected and the template image is calculated; finally, the circle center of the hub and the position of the air tap in the image to be detected are calculated from the marked points of the template image. The stage specifically comprises the following steps:
b1, extracting SIFT feature points on the image to be detected
The to-be-detected hub feature point extraction module obtains the SIFT feature points of the image to be detected according to step A21, i.e. it analyzes the input hub image to be detected, searches for pixel points in the image that satisfy the SIFT feature point characteristics, and computes and stores their SIFT feature point description information.
B2, matching feature points
The feature point matching module searches for feature points matching between the hub image to be detected and the hub template image, eliminates mismatched points, and calculates the spatial mapping relation between the hub to be detected and the template hub. The specific steps are as follows:
B21, initial matching of the reference image and the image to be matched using the nearest-neighbor / next-nearest-neighbor algorithm
The BBF algorithm is used to search, for a feature point p to be matched (with feature vector vi), the nearest-neighbor feature point pmin (with feature vector vmin) and the next-nearest-neighbor feature point pmin2 (with feature vector vmin2) in Euclidean distance; point pairs satisfying the following condition are taken as matched feature points:
Dist(vi, vmin) / Dist(vi, vmin2) < 0.8   (8)
where Dist(vi, vmin) denotes the Mahalanobis distance between vi and vmin, and Dist(vi, vmin2) the Mahalanobis distance between vi and vmin2, i.e.
Dist(vi, vmin2) = sqrt( (vi − vmin2)(vi − vmin2)ᵀ )   (9)
Dist(vi, vmin) = sqrt( (vi − vmin)(vi − vmin)ᵀ )   (10)
Where the superscript T denotes the matrix transpose symbol.
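In practice, step B21 can be approximated with OpenCV's SIFT and a FLANN KD-tree matcher standing in for the BBF search; note that OpenCV compares descriptors with the Euclidean (L2) distance rather than the Mahalanobis distance named above, and the FLANN parameters below are illustrative assumptions:

```python
import cv2

def match_ratio_test(img_template, img_test, ratio=0.8):
    """Sketch of step B21: nearest/next-nearest neighbour ratio matching."""
    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(img_template, None)
    kp_d, des_d = sift.detectAndCompute(img_test, None)
    # FLANN KD-tree (algorithm=1) as a stand-in for the Best-Bin-First search
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn = flann.knnMatch(des_t, des_d, k=2)
    # keep pairs whose nearest/next-nearest distance ratio is below 0.8, eq. (8)
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return kp_t, kp_d, good
```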
B22, eliminating mismatched points with the RANSAC algorithm and calculating the spatial correspondence between the target region and the template image.
Let point sets A and B be the initial matching point sets obtained on the template image and the detection image, respectively; the RANSAC algorithm then comprises the following steps:
b221, randomly selecting 4 pairs of matching point pairs in the point pair sets A and B, and calculating a projection transformation matrix H of the four pairs of point pairs:
For a point p(x, y) in the image, the point is transformed by the matrix H into the point p′(x′, y′), i.e.
[x′; y′; 1] = H [x; y; 1]   (11)
where
H = [h00  h01  h02; h10  h11  h12; h20  h21  1]
That is, H can be determined from matched point pairs p(x, y) and p′(x′, y′); a projective transformation matrix can be computed from every 4 pairs of matching points.
B222, performing spatial transformation on all characteristic points in the point set A by using the projection transformation matrix H obtained in the step B221 to obtain a point set B';
Calculate the coordinate error of each pair of corresponding points in the point sets B and B′, namely e = ||B − B′||; set an error threshold σ; if e < σ, the point pair is considered an inner (inlier) point pair, otherwise an outer (outlier) point pair;
b223, repeating the steps B221 and B222, finding out one transformation with the largest number of interior point pairs, taking the interior point pair set obtained by the transformation as a new point set A and B, and performing a new iteration;
b224 iteration end judgment: when the number of the inner point pairs obtained by iteration is consistent with the number of the point pairs in the point sets A and B before the iteration, the iteration is terminated;
B225, iteration result: the final A and B are the matching point sets after mismatched feature point pairs have been removed, and the corresponding projective transformation matrix H represents the required spatial transformation between the original (template) image and the image to be detected.
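Steps B221 to B225 can equivalently be delegated to OpenCV's RANSAC-based homography estimation; the reprojection threshold below plays the role of the error threshold σ of step B222 and its value of 3 pixels is only an assumption:

```python
import cv2
import numpy as np

def estimate_homography(kp_t, kp_d, good_matches, reproj_thresh=3.0):
    """Sketch of step B22: reject mismatches with RANSAC and estimate H."""
    src = np.float32([kp_t[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_d[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    inliers = [m for m, keep in zip(good_matches, mask.ravel()) if keep]
    return H, inliers
```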
B3, positioning the circle center of the hub and the position of the air tap in the image to be detected
The hub positioning module locates the circle center of the hub and the position of the air tap in the image to be detected, and calculates the radius of the hub in the image to be detected. The specific steps are as follows:
B31, calculating, from the spatial transformation matrix H obtained in step B222, the six pixel points in the image to be detected that correspond to the calibration points: the hub workpiece circle center O′(x0′, y0′), the hub workpiece air tap center O′gas(x′gas, y′gas), and four points on the hub outer-edge circumference O1′(x1′, y1′), O2′(x2′, y2′), O3′(x3′, y3′), O4′(x4′, y4′).
Taking the coordinates O′(x0′, y0′) of the hub center position as an example:
x0′ = (h00·x0 + h01·y0 + h02) / (h20·x0 + h21·y0 + 1)
y0′ = (h10·x0 + h11·y0 + h12) / (h20·x0 + h21·y0 + 1)   (12)
B32, calculating the radius R′ of the hub workpiece in the image to be detected:
R′ = (d1 + d2 + d3 + d4) / 4   (13)
where di = sqrt( (xi′ − x0′)² + (yi′ − y0′)² ) is the distance from the detected hub center O′ to the outer-edge point Oi′, i = 1, 2, 3, 4.
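A sketch of step B3 follows; cv2.perspectiveTransform evaluates the projective mapping of equation (12), and the radius is the average of the four center-to-rim distances of equation (13). The calibration points passed in are those stored in step A22:

```python
import cv2
import numpy as np

def locate_hub(H, center, gas, rim_points):
    """Sketch of step B3: map the six calibration points through H.

    center and gas are (x, y) template coordinates; rim_points is the list
    of the four outer-edge points O1..O4 from step A22.
    """
    pts = np.float32([center, gas] + list(rim_points)).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    center_d, gas_d, rim_d = mapped[0], mapped[1], mapped[2:]
    R = float(np.mean(np.linalg.norm(rim_d - center_d, axis=1)))  # equation (13)
    return center_d, gas_d, rim_d, R
```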
Compared with the prior art, the invention has the following beneficial effects:
1. In order to effectively position the hub workpiece, the invention takes into account the illumination effects and the translation, rotation and scale changes encountered in hub image matching. It adopts the Scale Invariant Feature Transform (SIFT) feature point matching method to find spatially corresponding point pairs between the template image and the image to be detected, uses these point pairs to determine the spatial correspondence between the template image and the hub region image in the image to be detected, and finally maps the known calibration points of the template image, through this spatial correspondence, to the corresponding hub points in the image to be detected, thereby achieving hub positioning. The document Lowe D G, 'Distinctive image features from scale-invariant keypoints', International Journal of Computer Vision, 2004, 60(2):91-110, shows that feature points satisfying the SIFT properties remain stable under illumination change, translation, rotation and scale change; the invention therefore has good robustness to ambient light, viewing-angle change and partial occlusion, can position the hub workpiece in different interference environments, and achieves a good positioning effect.
2. Before actual positioning, the feature point information of the hub template image is obtained by off-line processing, and the circle center and air tap of the hub template are calibrated in advance, which reduces the amount of computation during actual hub positioning.
3. The invention adopts the SIFT algorithm as the feature point matching method, which handles illumination, translation, rotation and scale changes in the image matching process and is also robust to background noise, viewing-angle change and partial occlusion.
4. The invention uses the RANSAC method to eliminate mismatched point pairs, which improves the matching precision.
Drawings
The invention is illustrated by 7 accompanying figures, wherein:
FIG. 1 is a flow chart of a hub workpiece positioning method based on SIFT features.
Fig. 2 is a schematic view of a hub workpiece positioning device based on SIFT features.
FIG. 3 is a schematic view of hub template marker points.
Fig. 4 shows the positioning result in the case of a rotational translation of the hub.
FIG. 5 shows the positioning result in the case of a disturbance around the hub.
Fig. 6 shows the positioning result when the background of the hub image is not uniform.
Fig. 7 shows the positioning result when part of the hub is missing.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The components of a hub workpiece positioning device are shown in fig. 2, and the specific method flow is shown in fig. 1.
Objective and subjective tests were performed to verify the effectiveness of the present invention.
1. Subjective Performance test (visual effect)
When the camera captures the hub image, various noise and interference conditions can occur. To verify the effectiveness of the method, a number of images under different interference conditions were collected for the experiments.
In the experiment, the size of the template image is 690 × 691 pixels, and the size of the detection image is 1280 × 960 pixels, wherein the conditions of the hub template and the mark points thereof are shown in fig. 3. In fig. 3, the center of the hub template and the center of the air tap are marked by cross marks, and the periphery of the outer edge of the hub template is marked by a solid circle. In the process of positioning the hub, the relevant coordinate values in the template image need to be selected firstly, in the experiment, the coordinate of the center of the hub in the template image is (353,351), the coordinate of the center point of the air tap is (127,246), and the radius of the hub is 339 pixels.
Owing to space limitations, one image is shown for each interference condition; the positioning results are displayed in Figs. 4-7. The actual hub center point and air tap center are marked with star marks and the actual hub outer-edge circumference with a solid circle; the detected hub center point and air tap center are marked with crosses and the detected outer-edge circumference with a dashed circle. In addition, the circle-center and air-tap regions are enlarged on the right side of each figure to show the detection result more clearly.
2. Objective performance criteria
To objectively evaluate the positioning accuracy, the invention computes, for each interference condition, the absolute difference between the detected radius, circle-center coordinates and air-tap-center coordinates and their actual values, i.e. the absolute value of the deviation between the positioning result and the ground truth, and then averages these absolute differences. Table 1 lists, for the different interference conditions, the mean absolute difference between the positioning results (hub center, air tap center and radius) and the actual values; the data in the table are in pixels.
Table 1. Mean absolute difference between the hub positioning results of the method of the invention and the actual values
As can be seen from Table 1, even when the hub region in the image undergoes rotation and translation, has an uneven background or interfering objects, or is partially missing, hub positioning based on SIFT feature point matching obtains good results and is hardly affected by these interference factors.
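For reference, the mean absolute difference used as the objective criterion is simply the average of |estimate − actual| in pixels; a minimal sketch (our own helper) is:

```python
import numpy as np

def mean_absolute_difference(estimates, ground_truth):
    """Mean absolute deviation (in pixels) between detections and ground truth."""
    estimates = np.asarray(estimates, dtype=np.float64)
    ground_truth = np.asarray(ground_truth, dtype=np.float64)
    return float(np.mean(np.abs(estimates - ground_truth)))
```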
3. Aiming at the technical scheme of the invention, the following alternative schemes can also achieve the aim of the invention
(1) In a more ideal working environment, e.g. with sufficient illumination and no occlusion, other feature point matching algorithms with lower robustness but smaller computational cost, such as the SURF algorithm, can be used for matching;
(2) The SIFT algorithm is a feature-point-based matching method; since the hub is a regular, rotationally symmetric workpiece, line features can be used for matching instead of point features.
(3) Building on this method, under better illumination conditions the hub region can first be segmented by image region segmentation before SIFT feature point matching is performed, which effectively reduces the computational cost of the positioning method.

Claims (2)

1. A device for positioning a hub workpiece, characterized in that: the hub positioning device comprises an image acquisition module, a hub template information extraction module, a to-be-detected hub feature point extraction module, a feature point matching module and a hub positioning module; the image acquisition module acquires a grayscale image of the hub workpiece; the hub template information extraction module extracts the SIFT feature points on the hub template image, the positions of the circle center and the air tap, and four points on the outer-edge circumference of the hub; the to-be-detected hub feature point extraction module extracts the SIFT feature point information on the hub image to be detected; the feature point matching module searches for feature point pairs matching the hub image to be detected with the hub template image and calculates the spatial mapping relation between the hub to be detected and the template hub; the hub positioning module locates the circle center of the corresponding hub, the air tap and the four points on the outer-edge circumference of the hub in the image to be detected, and calculates the hub radius in the image to be detected;
the output of the image acquisition module is connected to the hub template information extraction module and the to-be-detected hub feature point extraction module; the inputs of the feature point matching module are connected to the hub template information extraction module and the to-be-detected hub feature point extraction module; and the output of the feature point matching module is connected to the hub positioning module.
2. A positioning method of a positioning device of a hub workpiece is characterized in that: the method comprises the following steps:
A. off-line processing
In the off-line processing stage, collecting a wheel hub workpiece image, extracting and storing SIFT feature point information of the wheel hub image, and marking the position of a circle center and an air nozzle on a template image in advance; the method specifically comprises the following steps:
a1, collecting the wheel hub workpiece image
The method comprises the steps that an image acquisition module acquires a gray level image of a hub workpiece, and when the image acquisition module is used, an ideal hub template image needs to be acquired in an environment with good illumination condition and low noise; the background color of the template image is required to be uniform, and the image only has a hub and no other interferents;
a2, extracting hub template information
Extracting SIFT feature points on the hub template image by a hub template information extraction module, calibrating the circle center and the air tap position of the hub template, and measuring the radius of the hub template; the method comprises the following specific steps:
a21, analyzing the input hub workpiece template image, searching pixel points which meet SIFT feature point characteristics in a hub workpiece area, counting and storing SIFT feature point description information, namely obtaining SIFT feature point template information of the hub according to the steps A211-A216:
a211, constructing an image pyramid T
Define the input image as f(x, y) and down-sample f(x, y) I times to obtain an image pyramid T of (I + 1) layers, where I = log2[min(M, N)] - 3 and M and N are the numbers of rows and columns of f(x, y), respectively; down-sampling takes the average of four adjacent pixels as the down-sampled pixel value;
define the layer-0 image of the image pyramid T as T0(x, y), i.e. the original image f(x, y); the layer-i image is defined as Ti(x, y), i.e. the original image f(x, y) down-sampled i times, i = 0, 1, 2, ..., I;
a212, constructing a Gaussian pyramid L
A Gaussian convolution kernel G(x, y, σ) is convolved with Ti(x, y), and the scale-space factor σ is varied continuously to obtain the scale space Li:
Li(x, y, σ) = G(x, y, σ) * Ti(x, y)   (1)
where the symbol * denotes the convolution operator, G(x, y, σ) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)), σ is the scale-space factor, and i = 0, 1, 2, ..., I;
performing the same operation on (I +1) images in the T to obtain L;
a213 constructing a DoG pyramid D
Take the difference of every two adjacent images (adjacent scales) in Li to obtain the DoG space Di, i.e.
Di(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * Ti(x, y) = Li(x, y, kσ) − Li(x, y, σ)   (2)
where the symbol * denotes the convolution operator, k is the constant ratio between two adjacent scale-space levels, and i = 0, 1, 2, ..., I;
Performing the same operation on the (I +1) group of images in the L to obtain D;
a214, detecting the spatial local extreme point in D
Using the Taylor expansion of the DoG function about the sample point,
D(X) = D + (∂Dᵀ/∂X) X + (1/2) Xᵀ (∂²D/∂X²) X   (3)
where X = (x, y, σ)ᵀ;
setting the derivative of D(X) to zero yields the extreme point X̂ with sub-pixel precision, i.e.
X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X)   (4)
A215, screening unstable extreme points to obtain an SIFT feature point set
First, points with low contrast are removed, i.e. extreme points X̂ whose DoG response |D(X̂)| falls below a contrast threshold are discarded; edge extreme points are then removed using the Hessian matrix;
the second derivative in the x direction of an image at a given scale in the DoG pyramid is defined as Dxx (and similarly Dyy, Dxy); the Hessian matrix is then expressed as:
H = [Dxx  Dxy; Dxy  Dyy]   (5)
the two eigenvalues of H are defined as λ1 and λ2, where λ1 ≥ λ2 and λ1/λ2 = r; λ1 and λ2 correspond to the principal curvatures of the image in the x and y directions, respectively; when r exceeds the threshold 10, the extreme point is judged to lie on an edge of the DoG surface;
let Tr(H) denote the trace of H and Det(H) its determinant; then
Tr(H)²/Det(H) = (λ1 + λ2)²/(λ1 λ2) = (rλ2 + λ2)²/(rλ2·λ2) = (r + 1)²/r > (10 + 1)²/10   (6)
computing Tr(H) and Det(H) thus avoids computing the eigenvalues directly, which reduces the amount of computation;
a216, calculating SIFT feature point descriptors
Taking an image region of a given size around the key point as the statistical range, divide it into several blocks; the gradient histogram of the points in each block is accumulated, and a vector representing the image information of the region is computed;
the modulus defining the gradient is m (x, y) and the direction is θ (x, y), then
m ( x , y ) = ( L ( x + 1 , y ) - L ( x - 1 , y ) ) 2 + ( L ( x , y + 1 ) - L ( x , y - 1 ) ) 2 - - - ( 7 )
Firstly, calculating an image region required by a descriptor, dividing a neighborhood near a feature point into 4 multiplied by 4 sub-regions, wherein the size of each sub-region is 3 sigma, and sigma is a scale space factor; then, the histogram of gradient direction of each sub-region is counted: taking the direction of the characteristic point as a reference direction, then calculating the angle of the gradient direction of each pixel point in each sub-region relative to the reference direction, projecting the angle to 8 directions at intervals of pi/4 in a 0-2 pi interval, counting the accumulation of gradient values in each direction, and generating an 8-dimensional vector descriptor after normalization operation; finally, 8-dimensional vectors of each sub-region are collected to form a feature point descriptor with dimensions of 4 multiplied by 8 which are 126;
A22, the hub template information extraction module calibrates and stores, in pixels, the position information of six pixel points in the hub template image: the hub workpiece center O(x0, y0), the hub workpiece air tap center Ogas(xgas, ygas), and four points on the hub outer-edge circumference O1(x1, y1), O2(x2, y2), O3(x3, y3), O4(x4, y4);
B. On-line processing
In the online processing stage, SIFT feature points on an image to be detected are extracted; then searching characteristic points matched with the hub template by using a Best-Bin-First search algorithm; then eliminating mismatching points by using an RANSAC algorithm, and calculating a spatial mapping relation between a hub and a template image in an image to be detected; finally, calculating the circle center of the wheel hub and the position of the air tap in the image to be detected according to the mark points of the template image; the method specifically comprises the following steps:
b1, extracting SIFT feature points on the image to be detected
The to-be-detected hub feature point extraction module obtains the SIFT feature points of the image to be detected according to step A21, i.e. it analyzes the input hub image to be detected, searches for pixel points in the image that satisfy the SIFT feature point characteristics, and computes and stores their SIFT feature point description information;
B2, matching feature points
The feature point matching module searches for feature points matching the hub template image, eliminates mismatched points, and calculates the spatial mapping relation between the hub to be detected and the template hub; the specific steps are as follows:
b21, carrying out initial matching on the reference image and the image to be matched by using a nearest neighbor/next nearest neighbor algorithm
The BBF algorithm is used to search, for a feature point p to be matched, the nearest-neighbor feature point pmin and the next-nearest-neighbor feature point pmin2 in Euclidean distance, where the feature vector of p is vi, that of pmin is vmin and that of pmin2 is vmin2; point pairs satisfying the following condition are matched feature points:
Dist(vi, vmin) / Dist(vi, vmin2) < 0.8   (8)
where Dist(vi, vmin) denotes the Mahalanobis distance between vi and vmin, and Dist(vi, vmin2) the Mahalanobis distance between vi and vmin2, i.e.
Dist(vi, vmin2) = sqrt( (vi − vmin2)(vi − vmin2)ᵀ )   (9)
Dist(vi, vmin) = sqrt( (vi − vmin)(vi − vmin)ᵀ )   (10)
Wherein the superscript T represents a matrix transpose symbol;
b22, eliminating mismatching points by using a RANSAC algorithm, and calculating a spatial corresponding relation between a target area and a template image;
if the point sets a and B are initial matching point sets obtained on the template image and the detection image respectively, the RANSAC algorithm specifically comprises the following steps:
b221, randomly selecting 4 pairs of matching point pairs in the point pair sets A and B, and calculating a projection transformation matrix H of the four pairs of point pairs:
for a point p (x, y) in the image, the point is transformed by the matrix H to a point p ' (x ', y '), i.e.
x &prime; y &prime; 1 = H x y 1 - - - ( 11 )
Wherein,
that is, H can be obtained by matching the pair of points p (x, y) and p ' (x ', y '); calculating a projective transformation matrix for every 4 pairs of matching points;
b222, performing spatial transformation on all characteristic points in the point set A by using the projection transformation matrix H obtained in the step B221 to obtain a point set B';
calculate the coordinate error of each pair of corresponding points in the point sets B and B′, namely e = ||B − B′||; set an error threshold σ; if e < σ, the point pair is considered an inner (inlier) point pair, otherwise an outer (outlier) point pair;
b223, repeating the steps B221 and B222, finding out one transformation with the largest number of interior point pairs, taking the interior point pair set obtained by the transformation as a new point set A and B, and performing a new iteration;
b224 iteration end judgment: when the number of the inner point pairs obtained by iteration is consistent with the number of the point pairs in the point sets A and B before the iteration, the iteration is terminated;
b225, iteration result: the final A and B are the matching point sets after mismatched feature point pairs have been removed, and the corresponding projective transformation matrix H represents the required spatial transformation between the original (template) image and the image to be detected;
b3, positioning the circle center of the hub and the position of the air tap in the image to be detected
Positioning the circle center of the wheel hub and the position of the air tap in the image to be detected by the wheel hub positioning module, and calculating the radius length of the wheel hub in the image to be detected; the method comprises the following specific steps:
b31, calculating, from the spatial transformation matrix H obtained in step B221, the six pixel points in the image to be detected that correspond to the calibration points: the hub workpiece circle center O′(x0′, y0′), the hub workpiece air tap center O′gas(x′gas, y′gas), and four points on the hub outer-edge circumference O1′(x1′, y1′), O2′(x2′, y2′), O3′(x3′, y3′), O4′(x4′, y4′);
taking the coordinates O′(x0′, y0′) of the hub center position as an example:
x0′ = (h00·x0 + h01·y0 + h02) / (h20·x0 + h21·y0 + 1)
y0′ = (h10·x0 + h11·y0 + h12) / (h20·x0 + h21·y0 + 1)   (12)
b32, calculating the radius R' of the hub workpiece in the image to be detected:
R′ = (d1 + d2 + d3 + d4) / 4   (13)
where di = sqrt( (xi′ − x0′)² + (yi′ − y0′)² ), i = 1, 2, 3, 4.
CN201410349103.9A 2014-07-18 2014-07-18 A kind of devices and methods therefor of hub workpiece positioning Expired - Fee Related CN104123542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410349103.9A CN104123542B (en) 2014-07-18 2014-07-18 A kind of devices and methods therefor of hub workpiece positioning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410349103.9A CN104123542B (en) 2014-07-18 2014-07-18 A kind of devices and methods therefor of hub workpiece positioning

Publications (2)

Publication Number Publication Date
CN104123542A CN104123542A (en) 2014-10-29
CN104123542B true CN104123542B (en) 2017-06-27

Family

ID=51768947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410349103.9A Expired - Fee Related CN104123542B (en) 2014-07-18 2014-07-18 A kind of devices and methods therefor of hub workpiece positioning

Country Status (1)

Country Link
CN (1) CN104123542B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680550A (en) * 2015-03-24 2015-06-03 江南大学 Method for detecting defect on surface of bearing by image feature points
CN105423975B (en) * 2016-01-12 2018-02-09 济南大学 The calibration system and method for a kind of large-scale workpiece
CN105976358B (en) * 2016-04-27 2018-07-27 北京以萨技术股份有限公司 A method of the fast convolution for the more convolution kernels of feature pyramid calculates
CN106325205B (en) * 2016-09-20 2019-01-25 图灵视控(北京)科技有限公司 A kind of hub installing hole flexibility automatic processing system based on machine vision
CN109427050B (en) * 2017-08-23 2022-04-29 阿里巴巴集团控股有限公司 Guide wheel quality detection method and device
CN107866386B (en) * 2017-09-30 2020-10-16 绿港环境资源股份公司 Perishable waste identification system and method
CN107862690B (en) * 2017-11-22 2023-11-14 佛山科学技术学院 Circuit board component positioning method and device based on feature point matching
CN108491841A (en) * 2018-03-21 2018-09-04 东南大学 A kind of automotive hub type identification monitoring management system and method
CN108665057A (en) * 2018-03-29 2018-10-16 东南大学 A kind of more production point wheel hub image classification methods based on convolutional neural networks
CN109060262A (en) * 2018-09-27 2018-12-21 芜湖飞驰汽车零部件技术有限公司 A kind of wheel rim weld joint air-tight detection device and air-tightness detection method
CN109592433B (en) * 2018-11-29 2021-08-10 合肥泰禾智能科技集团股份有限公司 Goods unstacking method, device and system
CN109871854B (en) * 2019-02-22 2023-08-25 大连工业大学 Quick hub identification method
CN111191708A (en) * 2019-12-25 2020-05-22 浙江省北大信息技术高等研究院 Automatic sample key point marking method, device and system
CN111259971A (en) * 2020-01-20 2020-06-09 上海眼控科技股份有限公司 Vehicle information detection method and device, computer equipment and readable storage medium
CN111687444B (en) * 2020-06-16 2021-04-30 浙大宁波理工学院 Method and device for identifying and positioning automobile hub three-dimensional identification code
CN112198161A (en) * 2020-10-10 2021-01-08 安徽和佳医疗用品科技有限公司 PVC gloves real-time detection system based on machine vision
CN112883963B (en) * 2021-02-01 2022-02-01 合肥联宝信息技术有限公司 Positioning correction method, device and computer readable storage medium
CN113432585A (en) * 2021-06-29 2021-09-24 沈阳工学院 Non-contact hub position accurate measurement method based on machine vision technology
CN113591923A (en) * 2021-07-01 2021-11-02 四川大学 Engine rocker arm part classification method based on image feature extraction and template matching
CN113720280A (en) * 2021-09-03 2021-11-30 北京机电研究所有限公司 Bar center positioning method based on machine vision
CN114800533B (en) * 2022-06-28 2022-09-02 诺伯特智能装备(山东)有限公司 Sorting control method and system for industrial robot
CN116977341B (en) * 2023-09-25 2024-01-09 腾讯科技(深圳)有限公司 Dimension measurement method and related device
CN117058151B (en) * 2023-10-13 2024-01-05 山东骏程金属科技有限公司 Hub detection method and system based on image analysis

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799859A (en) * 2012-06-20 2012-11-28 北京交通大学 Method for identifying traffic sign
CN103077512A (en) * 2012-10-18 2013-05-01 北京工业大学 Feature extraction and matching method and device for digital image based on PCA (principal component analysis)
WO2014061221A1 (en) * 2012-10-18 2014-04-24 日本電気株式会社 Image sub-region extraction device, image sub-region extraction method and program for image sub-region extraction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3576987B2 (en) * 2001-03-06 2004-10-13 株式会社東芝 Image template matching method and image processing apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799859A (en) * 2012-06-20 2012-11-28 北京交通大学 Method for identifying traffic sign
CN103077512A (en) * 2012-10-18 2013-05-01 北京工业大学 Feature extraction and matching method and device for digital image based on PCA (principal component analysis)
WO2014061221A1 (en) * 2012-10-18 2014-04-24 日本電気株式会社 Image sub-region extraction device, image sub-region extraction method and program for image sub-region extraction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hub positioning method based on image matching technology; Li Dandan; Academic Resource Discovery Platform; 2014-06-16; p. 1 *
Image matching method based on improved SIFT algorithm; Cheng Dezhi et al.; Computer Simulation; 2011-07-31; Vol. 28, No. 7; pp. 285-289 *
Wheel hub shape and position parameter detection method based on area-array CCD; Le Ying et al.; Bulletin of Science and Technology; 2009-03-31; Vol. 25, No. 2; pp. 196-201 *

Also Published As

Publication number Publication date
CN104123542A (en) 2014-10-29

Similar Documents

Publication Publication Date Title
CN104123542B (en) A kind of devices and methods therefor of hub workpiece positioning
CN110163853B (en) Edge defect detection method
CN110148162B (en) Heterogeneous image matching method based on composite operator
CN104574421B (en) Large-breadth small-overlapping-area high-precision multispectral image registration method and device
CN103308430B (en) A kind of method and device measuring thousand grain weigth
CN111126174A (en) Visual detection method for robot to grab parts
CN107230203B (en) Casting defect identification method based on human eye visual attention mechanism
CN110910350B (en) Nut loosening detection method for wind power tower cylinder
CN101650784B (en) Method for matching images by utilizing structural context characteristics
JP6899189B2 (en) Systems and methods for efficiently scoring probes in images with a vision system
CN111862037A (en) Method and system for detecting geometric characteristics of precision hole type part based on machine vision
CN111062940B (en) Screw positioning and identifying method based on machine vision
CN107153848A (en) Instrument image automatic identifying method based on OpenCV
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN103210417A (en) Method for the pre-processing of a three-dimensional image of the surface of a tyre using successive B-spline deformations
JP2020161129A (en) System and method for scoring color candidate poses against color image in vision system
CN111563896A (en) Image processing method for catenary anomaly detection
JP5023238B2 (en) How to select the best trace area for shell case-based automatic region segmentation and shell case comparison
TWI543117B (en) Method for recognizing and locating object
CN105825215B (en) It is a kind of that the instrument localization method of kernel function is embedded in based on local neighbor and uses carrier
CN113436262A (en) Vision-based vehicle target position and attitude angle detection method
CN113705564A (en) Pointer type instrument identification reading method
CN108269264B (en) Denoising and fractal method of bean kernel image
CN117314986A (en) Unmanned aerial vehicle cross-mode power distribution equipment inspection image registration method based on semantic segmentation
CN104655041A (en) Industrial part contour line multi-feature extracting method with additional constraint conditions

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170627