CN109815822B - Patrol diagram part target identification method based on generalized Hough transformation - Google Patents


Info

Publication number
CN109815822B
CN109815822B (application CN201811612262.8A)
Authority
CN
China
Prior art keywords
image
point
inspection
edge
points
Prior art date
Legal status
Active
Application number
CN201811612262.8A
Other languages
Chinese (zh)
Other versions
CN109815822A (en)
Inventor
赵戊辰
Current Assignee
BEIJING AEROSPACE FUDAO HIGH-TECH CO LTD
Original Assignee
BEIJING AEROSPACE FUDAO HIGH-TECH CO LTD
Priority date
Filing date
Publication date
Application filed by BEIJING AEROSPACE FUDAO HIGH-TECH CO LTD
Priority to CN201811612262.8A
Publication of CN109815822A
Application granted
Publication of CN109815822B
Status: Active
Anticipated expiration

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for identifying part targets in an inspection image based on the generalized Hough transform, which comprises the following steps: Step A: selecting reference points in the inspection image, and positioning each reference point after the selection is completed; Step B: after positioning the reference points, establishing an R-table matrix according to the number and the distance of the reference points; Step C: building a Hough space according to the R-table matrix; Step D: selecting the maximum peak point in the Hough space as the optimal reference point to form a reference image; Step E: searching reference coordinates in the inspection image using the generalized Hough transform, extracting an image from the inspection image according to the reference coordinates, and computing the mutual information between the extracted image and the reference image to judge whether the part is missing. By identifying the parts in the inspection image with the generalized Hough transform, the invention can rapidly and accurately judge whether a part in the inspection image is missing, realizing real-time detection of equipment during unmanned aerial vehicle inspection.

Description

Patrol diagram part target identification method based on generalized Hough transformation
Technical Field
The invention relates to the technical field of image recognition, in particular to a method for recognizing targets of parts of a patrol chart based on generalized Hough transformation.
Background
With the rapid development of computer technology, object recognition has quickly become a very important tool, and its range of application keeps widening. However, targets such as nut and hand-wheel parts are affected by the environment, the shooting angle, and the shooting conditions, and are generally difficult to describe analytically, so recognizing them is a very difficult task. So far, research on recognizing nut and hand-wheel parts has mainly addressed the ideal case of parts photographed head-on: the inner circle of the part is detected by the circular Hough transform, and the part positions are separated by color. There has been little work on recognizing nut and hand-wheel parts on large machines in real air-to-ground images from unmanned aerial vehicle inspection, and little on judging whether such parts are missing.
Chinese patent publication No. CN103635169A discloses a defect detection system comprising: an image processing unit configured to acquire a morphological image showing the form of an absorbent article after processing in each of a plurality of steps; a defect detecting unit configured to detect, based on the morphological image acquired by the image processing unit, whether the processed absorbent article has a defective portion; and an image display unit configured to display an image of the processed absorbent article when the defect detecting unit detects a defective portion. This detection system has the following problems:
First, the detection system is only applied on a production line, where the product position is fixed so that part quality can be checked against the specification; it cannot detect accurately outdoors.
Second, the detection system only uses a camera to capture images of parts at fixed positions; when products are placed unevenly or their positions change, the captured images cannot be registered with the reference images, and the detection effect is poor.
Third, the detection method does not process image edges, so it cannot accurately judge the shape of a part.
Fourth, when detecting defects, the system judges whether a part is defective or missing only by comparing morphological images, so the detection result is inaccurate.
Disclosure of Invention
Therefore, the invention provides a method for identifying part targets in an inspection image based on the generalized Hough transform, to solve the problem that the prior art cannot detect accurately in real time outdoors.
Compared with the prior art, the method has the beneficial effect that parts in the inspection image are identified with the generalized Hough transform, so that whether a part in the inspection image is missing can be judged rapidly, realizing real-time detection of equipment during unmanned aerial vehicle inspection.
Furthermore, when selecting reference points the method can select points inside the identified edge image as well as non-edge points; widening the range of candidate reference points widens the detection range of the method.
In particular, by selecting the coordinate of the upper-left pixel of the template picture as the reference point, the method can detect irregularly shaped parts and parts whose shape has changed under environmental influence, further improving recognition efficiency.
Furthermore, when the Hough space is established its size is the same as that of the inspection picture, so that images can later be compared directly; this saves the time of resizing images during comparison and improves the efficiency of the method.
Further, when the reference coordinates are selected from the inspection image, the coordinate of the upper-left corner of the template is used, so the coordinate position is chosen uniformly; this unifies the relative positions of the images and improves the efficiency of the method.
Drawings
FIG. 1 is a flow chart of the method for identifying part targets in an inspection image based on the generalized Hough transform;
FIG. 2 is a registration diagram of an embodiment of the present invention after registering a field inspection diagram and a reference diagram;
FIG. 3 is an edge map obtained by edge extraction of a registration map according to an embodiment of the present invention;
Fig. 4 is a target extraction diagram of extracting a target part in a designated area according to an embodiment of the present invention.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The above and further technical features and advantages of the present invention are described in more detail below with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of the method for identifying part targets in an inspection image based on the generalized Hough transform according to the invention is shown; the method comprises:
Step A: selecting reference points in the inspection image, and positioning each reference point after the selection is completed;
Step B: after positioning the reference points, establishing an R-table matrix according to the number and the distance of the reference points;
Step C: building a Hough space according to the R-table matrix;
Step D: selecting the maximum peak point in the Hough space as the optimal reference point to form a reference image;
Step E: searching reference coordinates in the inspection image using the generalized Hough transform, extracting an image from the inspection image according to the reference coordinates, and computing the mutual information between the extracted image and the reference image to judge whether the part is missing.
Specifically, the reference point in step A may be any point of the edge image in the template image, including non-edge positions; the center point of the edge image is usually chosen. Because the edges of nut and hand-wheel parts are affected by the environment, the detected shape is generally irregular, so the coordinate of the upper-left pixel of the template picture is finally selected as the reference point.
Specifically, the Hough space in step C is built as follows: for each edge point (x_i, y_i) in the inspection edge picture, compute the discrete gradient-direction index k according to the rule, with k ∈ [1, 30]; for each template edge point falling into the same bin k, look up the R-table to compute the candidate reference coordinate (x'_c, y'_c) corresponding to that edge point, and accumulate it in the Hough space H.
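The R-table voting described above can be sketched in Python with numpy. This is a minimal illustration, not the patent's implementation: edge coordinates and gradient directions are assumed to be given (in practice they come from the Canny stage), the 30 direction bins follow the k ∈ [1, 30] quantization in the text but are 0-indexed here, and the function names are illustrative.

```python
import numpy as np

N_BINS = 30  # gradient direction quantized into 30 bins, as in the text

def _bin(theta):
    """Map a gradient direction (radians) to a discrete bin index."""
    return int((theta % np.pi) / np.pi * N_BINS) % N_BINS

def build_r_table(template_edges, template_grad, ref_point):
    """R-table: for each direction bin, the offsets from template edge
    points to the chosen reference point (here, e.g., the upper-left pixel)."""
    r_table = {k: [] for k in range(N_BINS)}
    for (x, y), theta in zip(template_edges, template_grad):
        r_table[_bin(theta)].append((ref_point[0] - x, ref_point[1] - y))
    return r_table

def hough_space(inspect_edges, inspect_grad, r_table, shape):
    """Accumulate candidate reference coordinates; H has the same size
    as the inspection image, as the patent requires."""
    H = np.zeros(shape, dtype=np.int32)
    for (x, y), theta in zip(inspect_edges, inspect_grad):
        for dx, dy in r_table[_bin(theta)]:
            xc, yc = x + dx, y + dy
            if 0 <= xc < shape[0] and 0 <= yc < shape[1]:
                H[xc, yc] += 1
    return H
```

Matching edge points of a translated copy of the template then all vote for the same accumulator cell, whose coordinates recover the translated reference point.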
Specifically, the method for selecting the maximum peak point in the Hough space in the step D includes:
Step E1: point X r=(xr,yr is selected within the edge image in the inspection map), point x= (X, y) is selected at the edge image boundary, at which point the vector difference r between points X r and X is calculated: r=x r -X.
Step E2: let the included angle between r and x axis beThe distance from the point X r to the boundary point X is r, and the boundary orientation θ of the boundary point X is calculated:
Dividing the range of possible values of θ into discrete k states iΔθ, i=1, 2, 3..k is denoted θ k =kΔθ, where Δθ is the angular increment of the direction parameter defining θ k;
step E3: establishing a relation lookup table R for the positions of the reference points determined by all the convenient points, and if the points (x, y) on the boundary of one area have parameters rθ k, R θ k is the shape characteristic of the boundary curve;
Step E4: each point on the boundary of the region is calculated to obtain the maximum value r * in { A (x r,yr) }:
Specifically, in step E, when the reference coordinate is selected from the inspection image, it is mapped to the upper-left-corner coordinate of the template image.
Example 1
This embodiment detects whether a hand-wheel part in field equipment is missing, and comprises three parts: registering the inspection image with the reference image based on SURF features, extracting the edges of the registered image with the Canny operator, and identifying the part target with the generalized Hough transform. When detecting field equipment, the acquired inspection image is first registered with the reference image using the SURF algorithm; after registration, the edges of the registered image are extracted with the Canny operator to output an edge image; after that, the generalized Hough transform locates the target, mutual information is computed between the edge image and the template image, and whether the part is missing is judged from the computed mutual information value. It can be understood that the detection method of this embodiment can judge not only whether the hand wheel in the device is missing, but also whether a nut or another part in the image is missing, as long as each part of this embodiment can reach the specified working state.
Specifically, in this embodiment, registering the inspection image with the reference image by SURF features comprises SURF feature extraction, Euclidean-distance ordering of feature points, Euclidean-distance screening of the match lines, and affine matrix calculation:
Step 1.1: SURF feature extraction is carried out on the image, and the SURF feature extraction comprises the steps of establishing an integral image, constructing an approximate Hessian matrix, constructing a scale space and accurately positioning feature points;
Step 1.2: features extracted by using a SURF algorithm are provided with scale and rotation invariance, and the SURF operator has strong robustness to image noise, light change, affine deformation and the like, so that extracted feature descriptors are very close to objects in the inspection map and the reference map, and distance measurement is needed. After SURF feature extraction is carried out on the inspection graph and the reference graph, the Euclidean distance method is used as similarity measurement;
Step 1.3: the inspection image and the reference image are arranged in parallel, the positions of the matching point pairs and the corresponding connecting straight lines of the matching point pairs are respectively marked in the inspection image and the reference image, the measurement of the alignment line distance is carried out according to the obtained matching point pairs of the Euclidean distance of the characteristic points, because the inspection image and the reference image are shot by adopting the same camera, the format and the size of the shot pictures are the same, the measurement method still selects the Euclidean distance method, and the second screening of the alignment point pairs is carried out near the average value of the alignment line;
Step 1.4: calculating affine transformation matrix according to the positions of the matching point pairs in the images, and finally realizing registration from the inspection image to the reference image by utilizing the matrix;
The registration pairs are screened by adopting a registration line Euclidean distance method, and affine matrix calculation is performed for registration, as shown in fig. 2, and the result is high in registration accuracy without missing, but the general distortion-free registration requirement can be realized.
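Steps 1.3 and 1.4 can be sketched as follows, assuming the matched point pairs have already been obtained from SURF descriptor matching (SURF itself requires opencv-contrib and is not reproduced here); the tolerance `tol` and the function names are illustrative:

```python
import numpy as np

def screen_pairs(src_pts, dst_pts, tol=0.2):
    """Second screening of step 1.3: keep only pairs whose match-line
    length lies within a relative tolerance of the mean length
    (correct matches between same-size images have nearly equal lengths)."""
    src, dst = np.asarray(src_pts, float), np.asarray(dst_pts, float)
    lengths = np.linalg.norm(dst - src, axis=1)
    keep = np.abs(lengths - lengths.mean()) <= tol * (lengths.mean() + 1e-9)
    return src[keep], dst[keep]

def affine_from_pairs(src, dst):
    """Step 1.4: least-squares 2x3 affine matrix A with dst ≈ A @ [x, y, 1]."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])  # n x 3 homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)   # solves X @ A ≈ dst
    return A.T                                    # 2 x 3 affine matrix
```

Because correct matches between two same-size, side-by-side images produce nearly equal line lengths, pairs far from the mean length are discarded before the least-squares affine fit.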
Specifically, the second part of the invention extracts image edges with the Canny operator, which comprises smoothing the input image with a Gaussian filter, computing the gradient magnitude and angle images, applying non-maximum suppression to the gradient magnitude image, double-threshold processing, and connectivity analysis, as follows:
Step 2.1: smoothing the inspection graph and the template graph by using a Gaussian filter, wherein in the Canny detection process, the derivative calculation process of the image is carried out, the derivative calculation result is not robust to noise, is sensitive to noise, so that smoothing is carried out, amplification of noise values is not caused, more false points are caused, false edges become more, adverse effects are caused on the extraction of the edges, but smoothing filtering and edge detection are mutually contradictory, and the smoothing filtering can effectively inhibit noise interference, but also blur the edges of the image, so that uncertainty is caused to the subsequent edge positioning operation, and a better compromise scheme can be provided between two contradictory bodies of noise removal and edge accurate detection positioning according to the result of actual engineering experience of the former;
Step 2.2: extracting the edge of the image, wherein the transformation of the pixel value along the edge direction is slower, and the pixel value variation in the normal direction perpendicular to the edge direction is more severe, because the color variation among objects, scenes, areas and the like is generally involved, a powerful method is provided for calculating the variation on the edge by a differential operator, in the practical engineering, the first derivative or the second derivative of the image is used for detecting the edge, whether an edge point can be detected by the first derivative method, namely, whether the point is on a slope is judged, and then the second derivative method can be used for judging whether an edge pixel point belongs to a bright side or a dark side, and the first derivative of the image, namely, the finite difference is used for determining the gradient amplitude and the direction of the image;
Step 2.3: the local gradient areas in the global gradient map are subjected to non-maximum suppression, the global gradient obtained by the processing does not mean a real edge, in order to determine the edge, the local gradient areas in the global gradient map are subjected to non-maximum suppression so as to reserve points with the maximum local gradient, the larger the value in the corresponding image gradient amplitude matrix is, the larger the gradient value is, which belongs to one of image enhancement methods and cannot represent the point, namely an edge point, the non-maximum suppression is an important part in a Canny edge detection method, namely a local maximum value in a gradient image is found, the pixel value of the point in the corresponding original image is reserved, and the gray value of the point corresponding to the non-maximum point is set to 0;
step 2.4: the method is characterized in that the method is used for processing by a threshold method to reduce false edge points, the threshold method is proposed to further extract real edge points, if only one threshold is used, point values lower than the value are set to zero, false edges with too low threshold can occur, the actual edge points with too high threshold can be deleted by mistake, and in order to improve the situation, two thresholds with high and low levels are used;
Step 2.5: after thresholding, connectivity analysis is required to form longer edge lines, an edge pixel point that has not been accessed is found in the image, and all weak pixels in the image are connected to the point using 8 connectivity to form the final edge image.
The edge extraction of the registered picture with the Canny operator is shown in fig. 3; as fig. 3 shows, the edges extracted with the Canny operator are clear.
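Steps 2.4 and 2.5 — the double threshold and the 8-connectivity linking — can be sketched in numpy as follows. The threshold values are illustrative, and the Gaussian smoothing, gradient, and non-maximum-suppression stages are assumed to have already produced the gradient magnitude map:

```python
import numpy as np
from collections import deque

def double_threshold_hysteresis(grad_mag, low, high):
    """Classify pixels as strong (>= high) or weak (>= low), then keep a
    weak pixel only if it is 8-connected to a strong one, growing the
    edges breadth-first from the strong pixels."""
    strong = grad_mag >= high
    weak = (grad_mag >= low) & ~strong
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    h, w = grad_mag.shape
    while q:
        x, y = q.popleft()
        for dx in (-1, 0, 1):          # visit the 8-neighbourhood
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < h and 0 <= ny < w and weak[nx, ny] and not edges[nx, ny]:
                    edges[nx, ny] = True
                    q.append((nx, ny))
    return edges
```

A weak pixel with no chain of 8-connected neighbours back to a strong pixel is dropped as a false edge, which is exactly the improvement over a single threshold described in step 2.4.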
The third part of the invention is part target identification based on the generalized Hough transform, comprising reference-point selection, R-table establishment, Hough space establishment, peak positioning, and mutual information calculation, as follows:
Step 3.1: the reference point can be any point in the template edge image, including the position of a non-edge point, the center point of the template edge image is generally selected, and the detected shape is generally irregular under the influence of the environment due to the edge of the hand wheel part, so that the coordinate of the pixel at the upper left corner of the template picture is finally selected as the reference point;
step 3.2: establishing an R-table matrix according to the total edge points of the template edge image and the distance reference quantity;
step 3.3: calculating a discrete value corresponding to the gradient according to the rule for each edge point in the inspection edge picture, and calculating a reference coordinate corresponding to the edge point which accords with the edge point in the inspection edge picture according to the R-table for each edge point in the template edge picture to establish a Hough space, wherein the space size is the same as the inspection picture space size;
Step 3.4: and searching the maximum peak point in the Hough space, namely searching the optimal reference point.
Step 3.5: and (3) finding a reference coordinate in the inspection image by using a generalized Hough transformation method, mapping the reference coordinate to the coordinate of the upper left corner of the template, extracting a picture with the size of the template from the inspection image by using the position of the reference coordinate, performing mutual information calculation with the template image, and judging whether the part is missing or not according to the calculated mutual information value.
To verify the reliability of this part of the method, this embodiment first extracts the template from the reference image and identifies it in the reference image; with the obtained template, the generalized Hough transform identifies the target very accurately. The hand-wheel recognition result after registration is shown in FIG. 4; as fig. 4 shows, this part of the method identifies the target in the image very accurately. When the hand wheel is missing, the region of template size is extracted at the position located by the generalized Hough transform and its mutual information with the template picture is computed, so whether the hand wheel is missing can be judged.
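The mutual-information decision of steps E and 3.5 can be sketched with a joint gray-level histogram; the bin count and the decision threshold here are assumptions for illustration, not values from the patent:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=16):
    """Mutual information between two equally sized gray images,
    computed from their joint gray-level histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of img_b
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def part_missing(extracted, template, threshold=0.5):
    """Decision of step E: low mutual information between the region
    extracted at the located reference coordinate and the template
    suggests the part is missing (threshold is illustrative)."""
    return mutual_information(extracted, template) < threshold
```

Identical images yield a mutual information close to the marginal entropy, while an extracted region that no longer contains the part shares little information with the template, so a low value flags a missing part.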
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.
The foregoing description is only of the preferred embodiments of the invention and is not intended to limit the invention; various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. A method for identifying part targets in an inspection image based on the generalized Hough transform, characterized in that the generalized-Hough-transform image identification method is applied to equipment inspection images for real-time detection, the method comprising the following steps:
Step A: selecting reference points in the inspection image, and positioning each reference point after the selection is completed;
Step B: after positioning the reference points, establishing an R-table matrix according to the number and the distance of the reference points;
Step C: building a Hough space according to the R-table matrix;
Step D: selecting the maximum peak point in the Hough space as the optimal reference point to form a reference image;
Step E: searching reference coordinates in the inspection image using the generalized Hough transform, extracting an image from the inspection image according to the reference coordinates, and computing the mutual information between the extracted image and the reference image to judge whether the part is missing;
The selection of the reference points in the step A comprises the points in the edge image and the points in the non-edge image;
in the step A, selecting the pixel coordinate of the upper left corner of the template picture as a reference point to detect irregularly-shaped parts;
In the step E, when a reference coordinate is selected from the inspection image, the reference coordinate is mapped to the coordinate of the upper left corner of the template image correspondingly;
The step a further includes registering the inspection map with the reference map by SURF features, including:
Step 1.1: SURF feature extraction is carried out on the image, and the SURF feature extraction comprises the steps of establishing an integral image, constructing an approximate Hessian matrix, constructing a scale space and accurately positioning feature points;
Step 1.2: performing SURF feature extraction on the inspection graph and the reference graph, and using a Euclidean distance method as similarity measurement;
Step 1.3: arranging the inspection graph and the reference graph in parallel, respectively marking the positions of the matching point pairs and the corresponding connecting straight lines in the inspection graph and the reference graph, and measuring the alignment line distance according to the obtained matching point pairs of the Euclidean distance of the characteristic points;
Step 1.4: calculating affine transformation matrix according to the positions of the matching point pairs in the images, and finally realizing registration from the inspection image to the reference image by utilizing the matrix;
the step A also comprises the steps of extracting the edges of the image through a Canny operator, and comprises the following steps:
step 2.1: smoothing the inspection map and the template map by using a Gaussian filter;
step 2.2: extracting the edges of the image;
Step 2.3: performing non-maximum suppression on each local gradient region in the global gradient map, finding out a local maximum value in the gradient image, reserving a pixel value of the point in the corresponding original image, and setting a gray value of the point corresponding to the non-maximum point to 0;
Step 2.4: processing by using a threshold method to reduce false edge points;
step 2.5: after thresholding, connecting all weak pixels in the image to the point using 8 connectivity to form a final edge image;
The Hough space in step C is built as follows: for each edge point (x_i, y_i) in the inspection edge picture, compute the discrete gradient-direction index k according to the rule, with k ∈ [1, 30]; for each template edge point falling into the same bin k, look up the R-table to compute the candidate reference coordinate (x'_c, y'_c) corresponding to that edge point, and establish the Hough space H.
2. The method for identifying the targets of the parts of the inspection chart based on the generalized Hough transformation according to claim 1, wherein the Hough space H is the same as the inspection chart space in size, and the time spent in the comparison process is reduced by unifying the image sizes.
3. The method for identifying the target of the inspection chart component based on the generalized Hough transformation according to claim 1, wherein the method for selecting the maximum peak point in the Hough space in the step D comprises the following steps:
Step E1: selecting a point X_r = (x_r, y_r) inside the edge image of the inspection map and a point X = (x, y) on the edge image boundary, and computing the vector difference r between X_r and X: r = X_r − X;
Step E2: letting the angle between r and the x-axis be φ and the distance from X_r to the boundary point X be |r|, and computing the boundary orientation θ of the boundary point X;
dividing the range of possible θ values into K discrete states θ_k = kΔθ, k = 1, 2, ..., K, where Δθ is the angular increment of the direction parameter defining θ_k;
Step E3: establishing the relation lookup table R from the reference-point offsets determined by all boundary points; if a point (x, y) on the boundary of a region has parameters (|r|, φ) under θ_k, the entries R(θ_k) are the shape characteristic of the boundary curve;
Step E4: accumulating a vote for every point on the boundary of the region and taking the maximum value r* of the accumulator {A(x_r, y_r)}, whose location is the maximum peak point.
CN201811612262.8A 2018-12-27 2018-12-27 Patrol diagram part target identification method based on generalized Hough transformation Active CN109815822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811612262.8A CN109815822B (en) 2018-12-27 2018-12-27 Patrol diagram part target identification method based on generalized Hough transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811612262.8A CN109815822B (en) 2018-12-27 2018-12-27 Patrol diagram part target identification method based on generalized Hough transformation

Publications (2)

Publication Number Publication Date
CN109815822A (en) 2019-05-28
CN109815822B (en) 2024-05-28

Family

ID=66602538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811612262.8A Active CN109815822B (en) 2018-12-27 2018-12-27 Patrol diagram part target identification method based on generalized Hough transformation

Country Status (1)

Country Link
CN (1) CN109815822B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110412609B (en) * 2019-07-11 2020-09-18 郑州航空工业管理学院 Multi-pulse laser radar target detection method
CN112435223B (en) * 2020-11-11 2021-11-23 马鞍山市瀚海云星科技有限责任公司 Target detection method, device and storage medium
CN113269767B (en) * 2021-06-07 2023-07-18 中电科机器人有限公司 Batch part feature detection method, system, medium and equipment based on machine vision
CN114359548A (en) * 2021-12-31 2022-04-15 杭州海康机器人技术有限公司 Circle searching method and device, electronic equipment and storage medium
CN114549438B (en) * 2022-02-10 2023-03-17 浙江大华技术股份有限公司 Reaction kettle buckle detection method and related device
CN116051629B (en) * 2023-02-22 2023-11-07 常熟理工学院 Autonomous navigation robot-oriented high-precision visual positioning method

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103456005A (en) * 2013-08-01 2013-12-18 华中科技大学 Method for matching generalized Hough transform image based on local invariant geometrical characteristics
CN104778707A (en) * 2015-04-22 2015-07-15 福州大学 Electrolytic capacitor detecting method for improving general Hough transform
CN106683075A (en) * 2016-11-22 2017-05-17 广东工业大学 Power transmission line tower cross arm bolt defect detection method
CN108550165A (en) * 2018-03-18 2018-09-18 哈尔滨工程大学 A kind of image matching method based on local invariant feature

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8805117B2 (en) * 2011-07-19 2014-08-12 Fuji Xerox Co., Ltd. Methods for improving image search in large-scale databases

Non-Patent Citations (3)

Title
Image matching under generalized Hough transform; Qiang Li et al.; IADIS International Conference on Applied Computing 2005; 2005-12-31; full text *
Research on Image Edge Detection Based on an Improved Canny Algorithm; Jin Yanhong; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology; pp. 18-32 *
Yan Sunsu. Research on a Machine-Vision-Based Detection Algorithm for Polarized Electronic Components on PCBs. China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology. 2018, pp. 9-70. *

Similar Documents

Publication Publication Date Title
CN109815822B (en) Patrol diagram part target identification method based on generalized Hough transformation
CN108921176B (en) Pointer instrument positioning and identifying method based on machine vision
CN107463918B (en) Lane line extraction method based on fusion of laser point cloud and image data
CN108776140B (en) Machine vision-based printed matter flaw detection method and system
CN108229475B (en) Vehicle tracking method, system, computer device and readable storage medium
CN111968172A (en) Method and system for measuring volume of material in stock ground
CN109816674A Edge extraction method for registration images based on the Canny operator
CN115908269B (en) Visual defect detection method, visual defect detection device, storage medium and computer equipment
CN105865329B Vision-based system and method for acquiring the end-face center coordinates of bundled round steel
KR100823549B1 (en) Recognition method of welding line position in shipbuilding subassembly stage
CN106709500B (en) Image feature matching method
CN112037203A (en) Side surface defect detection method and system based on complex workpiece outer contour registration
CN109911481B (en) Cabin frame target visual identification and positioning method and system for metallurgical robot plugging
Gao et al. A robust pointer meter reading recognition method for substation inspection robot
CN105718931B (en) System and method for determining clutter in acquired images
WO2021000948A1 (en) Counterweight weight detection method and system, and acquisition method and system, and crane
CN109727239A Method for registering an inspection image to a reference image based on SURF features
CN112164050B (en) Method and device for detecting surface defects of products on production line and storage medium
CN102706291A (en) Method for automatically measuring road curvature radius
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
CN104574312A (en) Method and device of calculating center of circle for target image
CN111242888A (en) Image processing method and system based on machine vision
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
CN110807354B (en) Industrial assembly line product counting method
CN114998571A (en) Image processing and color detection method based on fixed-size marker

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant