CN109815822A - Inspection figure components target identification method based on Generalized Hough Transform - Google Patents

Inspection figure components target identification method based on Generalized Hough Transform

Info

Publication number
CN109815822A
Authority
CN
China
Prior art keywords
inspection
point
image
edge
hough transform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811612262.8A
Other languages
Chinese (zh)
Other versions
CN109815822B (en)
Inventor
赵戊辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING AEROSPACE FUDAO HIGH-TECH CO LTD
Original Assignee
BEIJING AEROSPACE FUDAO HIGH-TECH CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING AEROSPACE FUDAO HIGH-TECH CO LTD filed Critical BEIJING AEROSPACE FUDAO HIGH-TECH CO LTD
Priority to CN201811612262.8A priority Critical patent/CN109815822B/en
Publication of CN109815822A publication Critical patent/CN109815822A/en
Application granted granted Critical
Publication of CN109815822B publication Critical patent/CN109815822B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a method for identifying part targets in inspection images based on the generalized Hough transform, comprising: Step A: selecting reference points in the inspection image and, after the selection is complete, locating each reference point; Step B: after the reference points are located, establishing an R-table matrix from the number of reference points and their distances; Step C: establishing the Hough space from the R-table matrix; Step D: selecting the maximal peak point in the Hough space as the optimal reference point to form a reference image; Step E: finding the reference coordinate in the inspection image with the generalized Hough transform method, extracting an image from the inspection image at that coordinate, and computing the mutual information between the extracted image and the reference image to judge whether the part is missing. By using the generalized Hough transform to identify the parts in the inspection image, the present invention can quickly and accurately judge whether a part in the inspection image is missing, thereby enabling real-time detection of equipment by an unmanned aerial vehicle during inspection.

Description

Inspection figure components target identification method based on Generalized Hough Transform
Technical field
The present invention relates to the technical field of image recognition, and in particular to a method for identifying part targets in inspection images based on the generalized Hough transform.
Background art
With the rapid development of computer technology, target recognition has become a very important tool and means, and its range of application keeps widening. However, because targets such as nuts and handwheels are affected by the environment, the shooting angle, and their own condition, they are generally difficult to express analytically, which makes recognizing them a very difficult task. To date, research on recognizing nuts, handwheels, and related parts has mainly addressed the ideal case of parts photographed from the front; the main methods detect the inner circle of the nut or handwheel with the circular Hough transform, or locate the part by colour segmentation. Work on recognizing nuts, handwheels, and related parts on large machinery in real air-to-ground images taken by inspection UAVs is scarce, and judgments about missing parts are rarer still.
Chinese patent publication No. CN103635169A discloses a defect detection system comprising an image processing unit, a defect detection unit, and an image display unit. The image processing unit is configured to obtain morphological images of an absorbent article, each showing the state of the absorbent article after the processing in one of a plurality of steps; the defect detection unit is configured to detect, based on the morphological images obtained by the image processing unit, whether the processed absorbent article has a defective portion; and the image display unit is configured to display the image of the processed absorbent article when the defect detection unit detects a defective portion. This detection system has the following problems:
First, the system is only applied on an assembly line, where the product position is fixed, to check whether the quality of the parts meets specification; it cannot achieve accurate detection outdoors.
Second, the system only uses a camera to acquire images of parts in a fixed placement position; when the product is placed unevenly or its position changes, the acquired image cannot be registered with the reference image, and the detection result is unsatisfactory.
Third, the detection method cannot process the edges of the image, so it cannot accurately judge the shape of a part.
Fourth, when detecting defects, the system determines whether a part is defective or missing only by comparing morphological images, so the detection result is not accurate.
Summary of the invention
To this end, the present invention provides a method for identifying part targets in inspection images based on the generalized Hough transform, so as to overcome the inability of the prior art to perform accurate real-time detection outdoors.
Compared with the prior art, the beneficial effect of the present invention is that, by using the generalized Hough transform to identify the parts in the inspection image, it can quickly and accurately judge whether a part in the inspection image is missing, thereby enabling real-time detection of equipment by an unmanned aerial vehicle during inspection.
Further, when choosing reference points, the method of the invention can select points inside the recognized edge image as well as points outside it; by enlarging the range from which reference points may be selected, the detection range of the method is increased.
In particular, the method takes the coordinate of the top-left pixel of the template picture as the reference point, which allows it to detect irregularly shaped parts and parts whose shape has been changed by the environment, further improving the recognition efficiency of the method.
Further, when establishing the Hough space, the method makes its size identical to the size of the inspection picture, so that in the subsequent comparison the images can be compared directly without spending time adjusting their sizes, which improves the efficiency of the method.
Further, when choosing the reference coordinate in the inspection image, the coordinate of the top-left corner point of the template is chosen; by unifying the position of the chosen coordinate, the relative positions of the images are unified, which improves the efficiency of the method.
Brief description of the drawings
Fig. 1 is a flow diagram of the inspection-image part target identification method based on the generalized Hough transform according to the present invention;
Fig. 2 is the registration result after a live inspection image is registered to the reference image in an embodiment of the present invention;
Fig. 3 is the edge map obtained after edge extraction is performed on the registered image in an embodiment of the present invention;
Fig. 4 is the target extraction result in which the target part is extracted from the specified region in an embodiment of the present invention.
Detailed description of the embodiments
In order that the objects and advantages of the present invention may be more clearly understood, the present invention is further described below with reference to embodiments; it should be understood that the specific embodiments described here are used only to explain the present invention and are not intended to limit it.
The foregoing and additional technical features and advantages are described in more detail below with reference to the accompanying drawings.
Please refer to Fig. 1, which is a flow diagram of the inspection-image part target identification method based on the generalized Hough transform of the present invention, comprising:
Step A: selecting reference points in the inspection image and, after the selection is complete, locating each reference point;
Step B: after the reference points are located, establishing an R-table matrix from the number of reference points and their distances;
Step C: establishing the Hough space from the R-table matrix;
Step D: selecting the maximal peak point in the Hough space as the optimal reference point to form a reference image;
Step E: finding the reference coordinate in the inspection image with the generalized Hough transform method, extracting an image from the inspection image at that coordinate, and computing the mutual information between the extracted image and the reference image to judge whether the part is missing.
Specifically, in step A the reference point may be any point inside the edge image of the template image, including positions that are not edge points; the centre point of the edge image is usually chosen. However, because the edges of nut and handwheel parts are affected by the environment and the detected shape is usually irregular, the coordinate of the top-left pixel of the template picture is finally chosen as the reference point.
Specifically, the method of establishing the Hough space in step C is as follows: for each edge point (xi, yi) in the inspection edge picture, the corresponding discrete gradient value k, with k ∈ [1, 30], is computed by the rule described above; for each edge point of the template edge picture that falls into bin k, the corresponding reference coordinate (x'c, y'c) of the matching edge point in the inspection edge picture is computed from the R-table, and the Hough space H is built up.
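The following is a minimal sketch of how such a Hough space can be accumulated, assuming the R-table has already been built from the template (step B) and that gradient directions are quantized into 30 bins, k ∈ [1, 30], as stated above. The function and variable names are illustrative and not taken from the patent.

```python
import cv2
import numpy as np

def build_hough_space(inspect_edges, inspect_gray, r_table, n_bins=30):
    """Accumulate votes for the reference point over an inspection edge image."""
    h, w = inspect_edges.shape
    hough = np.zeros((h, w), dtype=np.int32)          # same size as the inspection picture

    # Gradient direction at every pixel, quantized into n_bins discrete states (k in [1, n_bins]).
    gx = cv2.Sobel(inspect_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(inspect_gray, cv2.CV_64F, 0, 1, ksize=3)
    theta = np.arctan2(gy, gx)                        # orientation in [-pi, pi]
    k = ((theta + np.pi) / (2 * np.pi) * (n_bins - 1)).astype(int) + 1

    # Every inspection edge point votes for the candidate reference coordinates (x'_c, y'_c)
    # stored in the R-table under its gradient-direction bin.
    ys, xs = np.nonzero(inspect_edges)
    for x, y in zip(xs, ys):
        for dx, dy in r_table.get(k[y, x], []):
            xc, yc = x + dx, y + dy
            if 0 <= xc < w and 0 <= yc < h:
                hough[yc, xc] += 1
    return hough
```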
Specifically, in step D the method of selecting the maximal peak point in the Hough space comprises:
Step E1: select a point Xr = (xr, yr) in the edge image of the inspection figure and a point X = (x, y) on the boundary of the edge image, and compute the vector difference r between Xr and X: r = Xr - X.
Step E2: let the angle between r and the x-axis be φ; the distance from point Xr to the boundary point X is |r|, and the boundary orientation θ at the boundary point X is computed. The possible value range of θ is divided into k discrete states iΔθ, i = 1, 2, 3, ..., k, denoted θk = kΔθ, where Δθ is the angular step of the direction parameter defining θk;
Step E3: establish the relationship look-up table R from the reference point positions determined for all boundary points; if every point (x, y) on a region boundary has the parameters (r, θk), then (r, θk) is the shape feature of that boundary curve;
Step E4: compute the accumulator value for each point on the region boundary and find the maximum value r* in {A(xr, yr)}, i.e. r* = max A(xr, yr).
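A minimal sketch of steps E1–E4 under the same assumptions as above: the reference point Xr is taken as the top-left pixel of the template, the displacement r = Xr - X is stored for every template edge point X under its quantized orientation, and the maximal peak of the accumulator gives the optimal reference coordinate. The Canny thresholds and helper names are illustrative assumptions.

```python
import cv2
import numpy as np

def build_r_table(template_gray, n_bins=30, low=50, high=150):
    """Steps E1-E3: store displacement vectors r = Xr - X indexed by orientation bin."""
    edges = cv2.Canny(template_gray, low, high)
    gx = cv2.Sobel(template_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(template_gray, cv2.CV_64F, 0, 1, ksize=3)
    theta = np.arctan2(gy, gx)
    k = ((theta + np.pi) / (2 * np.pi) * (n_bins - 1)).astype(int) + 1

    xr, yr = 0, 0                                    # reference point Xr: template top-left corner
    r_table = {}
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        r_table.setdefault(k[y, x], []).append((xr - x, yr - y))
    return r_table

def find_peak(hough):
    """Step E4: the maximal peak of the Hough space is the optimal reference coordinate."""
    yc, xc = np.unravel_index(np.argmax(hough), hough.shape)
    return xc, yc
```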
Specifically, in step E, when the reference coordinate is chosen in the inspection image, the corresponding mapping is the coordinate of the top-left corner point of the template image.
Embodiment 1
This embodiment detects whether the handwheel parts of field equipment are missing, and comprises three parts: registration of the inspection image to the reference image based on SURF features, edge extraction of the registered image based on the Canny operator, and part target identification based on the generalized Hough transform. When field equipment is detected, the collected inspection image is first registered to the reference image with the SURF algorithm; after registration, the edges of the registered image are extracted with the Canny operator to output an edge image; after the edge image is output, the generalized Hough transform is applied to the edge image and the template image, the mutual information is computed, and whether a part is missing can be judged from the size of the computed mutual-information value. It should be understood that the detection method described in this embodiment can be used not only to judge whether a handwheel of the equipment is missing, but also to judge whether a nut or another part in the image is missing, provided that each part of this embodiment reaches its specified working state.
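The following driver is a minimal sketch of how the three parts of this embodiment fit together. The helper functions (register_surf, canny_edges, build_r_table, build_hough_space, find_peak, mutual_information) are hypothetical names, sketched alongside the corresponding steps in this description, and the decision threshold on the mutual-information value is an assumption, since the patent only states that the judgement is made from the size of that value.

```python
import cv2

def detect_missing_part(inspect_path, reference_path, template_path, mi_threshold=0.5):
    inspect = cv2.imread(inspect_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

    registered = register_surf(inspect, reference)        # part 1: SURF-based registration
    edges = canny_edges(registered)                        # part 2: Canny edge extraction

    r_table = build_r_table(template)                      # part 3: generalized Hough transform
    hough = build_hough_space(edges, registered, r_table)
    xc, yc = find_peak(hough)                              # optimal reference point (template top-left)

    h, w = template.shape                                  # extract a template-sized region at the peak
    patch = registered[yc:yc + h, xc:xc + w]               # assumes the peak lies fully inside the image
    mi = mutual_information(patch, template)
    return mi >= mi_threshold                              # True: part present; False: part missing
```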
Specifically, in this embodiment the first part registers the inspection image to the reference image by SURF features, and includes SURF feature extraction, sorting of the Euclidean distances between feature points, screening by the directrix Euclidean distance, and computation of the affine matrix, comprising:
Step 1.1: perform SURF feature extraction on the images, including building the integral image, constructing the approximate Hessian matrix, constructing the scale space, and precisely locating the feature points;
Step 1.2: the SURF operator has scale and rotation invariance and strong robustness to image noise, illumination change, affine deformation, and so on, so for the same object the feature descriptors extracted from the inspection image and the reference image are very close and can be compared with a distance metric. After SURF features are extracted from the inspection image and the reference image, the Euclidean distance is used as the similarity measure;
Step 1.3: arrange the inspection image and the reference image side by side, mark in each the positions of the matched point pairs and the straight lines connecting them, and measure the degree of registration by the distance of these lines, using the matched point pairs obtained from the feature-point Euclidean distances. Because the inspection image and the reference image are shot with the same camera, the format and size of the photographs are the same; the measure is still the Euclidean distance, and the matched pairs are screened programmatically around the average value of the directrix;
Step 1.4: for the matched point pairs remaining after screening, compute the affine transformation matrix from the position of each point in the images, and finally use this matrix to register the inspection image to the reference image.
The matched pairs are screened by the directrix Euclidean distance method and the affine matrix is then computed for registration; as shown in Fig. 2, although the registration accuracy in the case without missing parts is not high, the requirement of essentially error-free matching and alignment can generally be met. A sketch of this registration procedure is given below.
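A minimal sketch of part one (steps 1.1–1.4), assuming opencv-contrib-python with the non-free modules enabled (SURF lives in cv2.xfeatures2d). Lowe's ratio test stands in here for the directrix-distance screening of step 1.3, and the Hessian threshold, ratio and RANSAC settings are illustrative assumptions.

```python
import cv2
import numpy as np

def register_surf(inspect_gray, reference_gray, ratio=0.75):
    """Register the inspection image onto the reference image with SURF features."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(inspect_gray, None)    # step 1.1: SURF feature extraction
    kp2, des2 = surf.detectAndCompute(reference_gray, None)

    # Step 1.2: Euclidean (L2) distance between descriptors as the similarity measure.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)

    # Step 1.3 (screening): ratio test used here in place of the directrix-distance screening.
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    # Step 1.4: estimate the affine matrix from the screened pairs and warp the inspection image.
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    affine, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = reference_gray.shape
    return cv2.warpAffine(inspect_gray, affine, (w, h))
```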
Specifically, the second part of the present invention extracts the edges of the image with the Canny operator, and includes smoothing the input image with a Gaussian filter, computing the gradient magnitude image and the angle image, non-maximum suppression of the gradient magnitude image, dual-threshold processing, and connectivity analysis, comprising:
Step 2.1: smooth the inspection image and the template image with a Gaussian filter. The Canny detection process involves computing derivatives of the image, and derivatives are not robust to noise; they are very sensitive to it, so the image must be smoothed first, otherwise noise is amplified, and the more noise and spurious points there are, the more false edges appear, which harms edge extraction. Smoothing and edge detection are, however, conflicting requirements: although smoothing effectively suppresses noise, it also blurs the edges of the image, which makes edge localization uncertain. According to much prior engineering experience, the Gaussian filter provides a good compromise between this pair of contradictions, namely noise removal and accurate edge localization;
Step 2.2: extract the edges of the image. Along the edge direction the pixel values change slowly, while in the normal direction perpendicular to the edge the pixel values change sharply, generally because of the colour changes between objects, between scenes, or between regions; differential operators provide a powerful way to compute this change at the edge. In practical engineering, edges are detected with the first or second derivative of the image: whether a point is an edge point can be determined with the first derivative, that is, by judging whether the point lies on a slope, and the second derivative can then be used to judge light and dark, that is, whether an edge pixel belongs to the bright side or the dark side of the edge. The gradient magnitude and direction of the image are determined from its first partial derivatives, computed with finite differences;
Step 2.3: apply non-maximum suppression to each local gradient region in the global gradient map. The global gradient obtained by the above processing does not yet mean a real edge; to determine the edges, non-maximum suppression must be applied to each local gradient region in the global gradient map so that only the points of maximum local gradient are retained. A larger value at a point in the image gradient magnitude matrix means a larger gradient there, but that by itself is only image enhancement and cannot show that the point is an edge point. Non-maximum suppression is an important part of the Canny edge detection method: in short, the local maxima in the gradient image are found, the pixel values of the corresponding points in the original image are retained, and the grey values of the points corresponding to non-maximum points are set to 0;
Step 2.4: apply thresholding to reduce false edge points. Thresholding is introduced in order to further extract the true edge points. If only a single threshold is used, all points below it are set to zero: a threshold that is too low produces false edges, while one that is too high accidentally deletes real edge points. To improve this, two thresholds, a high one and a low one, are used;
Step 2.5: after thresholding, connectivity analysis is needed to form longer edge lines. An edge pixel that has not yet been visited is found in the image, all the weak pixels connected to that point are linked to it using 8-connectivity, and the final edge image is formed.
The edge extraction of the registered picture by the Canny operator is shown in Fig. 3; it can be seen from Fig. 3 that the edges of the image extracted with the Canny operator are also clear.
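A minimal sketch of part two (steps 2.1–2.5). OpenCV's Canny already performs the gradient computation, non-maximum suppression, dual thresholding and edge linking internally, so only the Gaussian smoothing is written out explicitly; the kernel size, sigma and the two thresholds are illustrative assumptions.

```python
import cv2

def canny_edges(gray, low=50, high=150):
    """Steps 2.1-2.5: Gaussian smoothing followed by Canny edge extraction."""
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)   # step 2.1: Gaussian filter smoothing
    # Steps 2.2-2.5: gradient magnitude and direction, non-maximum suppression,
    # dual-threshold processing and 8-connectivity edge linking (all inside cv2.Canny).
    return cv2.Canny(smoothed, low, high)
```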
The third part of the present invention is part target identification based on the generalized Hough transform, including reference point selection, R-table establishment, Hough space establishment, peak localization, and mutual-information computation, comprising:
Step 3.1: selection of the reference point. The reference point may be any point in the template edge image, including positions that are not edge points; the centre point of the template edge image is usually chosen. Because the edge of the handwheel part is affected by the environment and the detected shape is usually irregular, the coordinate of the top-left pixel of the template picture is finally chosen as the reference point;
Step 3.2: establish the R-table matrix from the total number of edge points of the template edge image and the distance reference quantities;
Step 3.3: for each edge point in the inspection edge picture, compute the corresponding discrete gradient value by the rule described above; for each edge point of the template edge picture that falls into that bin, compute from the R-table the corresponding reference coordinate of the matching edge point in the inspection edge picture, and build the Hough space, whose size is the same as the size of the inspection picture;
Step 3.4: find the maximal peak point in the Hough space, that is, search out the optimal reference point;
Step 3.5: using the generalized Hough transform method, find the reference coordinate in the inspection image; the corresponding mapping is the coordinate of the top-left corner point of the template. A picture of template size is extracted from the inspection image at this reference coordinate position, the mutual information between it and the template image is computed, and whether the part is missing is judged from the size of the computed mutual-information value.
To verify the reliability of this part of the method, this embodiment first extracts the template from the reference image and then identifies the reference image with it. Using the obtained template, the registered image is identified with the generalized Hough transform method very accurately; the handwheel recognition result is shown in Fig. 4, from which it can be seen that this part of the method is also very accurate at identifying the target in the picture. When the handwheel is missing, a region of template size is extracted at the position obtained by the generalized Hough transform, the mutual information with the template picture is computed, and whether the handwheel is missing can be judged.
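A minimal sketch of the mutual-information computation used for the missing-part judgement, assuming the extracted region and the template have the same size; the histogram bin count is an illustrative assumption, and the patent itself only states that the decision is made from the size of the mutual-information value.

```python
import numpy as np

def mutual_information(patch, template, bins=32):
    """Mutual information between the extracted template-sized region and the template image."""
    joint, _, _ = np.histogram2d(patch.ravel(), template.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)             # marginal of the extracted region
    py = pxy.sum(axis=0, keepdims=True)             # marginal of the template
    nz = pxy > 0                                    # skip zero entries to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A larger value indicates that the extracted region still resembles the template, suggesting the part is present; a small value suggests the part is missing.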
Thus far, the technical solution of the present invention has been described with reference to the preferred embodiments shown in the drawings; however, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will all fall within the protection scope of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may be modified and varied in various ways. Any modification, equivalent replacement, improvement, and so on made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (7)

1. A method for identifying part targets in inspection images based on the generalized Hough transform, characterized in that a generalized Hough transform image recognition method is applied to real-time inspection images of the equipment to be measured, comprising:
Step A: selecting reference points in the inspection image and, after the selection is complete, locating each reference point;
Step B: after the reference points are located, establishing an R-table matrix from the number of reference points and their distances;
Step C: establishing the Hough space from the R-table matrix;
Step D: selecting the maximal peak point in the Hough space as the optimal reference point to form a reference image;
Step E: finding the reference coordinate in the inspection image with the generalized Hough transform method, extracting an image from the inspection image at that coordinate, and computing the mutual information between the extracted image and the reference image to judge whether the part is missing.
2. The method for identifying part targets in inspection images based on the generalized Hough transform according to claim 1, characterized in that in step A the selection of reference points includes points in the edge image and points not in the edge image.
3. The method for identifying part targets in inspection images based on the generalized Hough transform according to claim 2, characterized in that in step A the coordinate of the top-left pixel of the template picture is chosen as the reference point, so as to detect irregularly shaped parts.
4. The method for identifying part targets in inspection images based on the generalized Hough transform according to claim 1, characterized in that the method of establishing the Hough space in step C is: for each edge point (xi, yi) in the inspection edge picture, computing the corresponding discrete gradient value k, with k ∈ [1, 30], by the rule described above; for each edge point of the template edge picture falling into bin k, computing from the R-table the corresponding reference coordinate (x'c, y'c) of the matching edge point in the inspection edge picture, and establishing the Hough space H.
5. The method for identifying part targets in inspection images based on the generalized Hough transform according to claim 4, characterized in that the size of the Hough space H is the same as the size of the inspection picture, so as to reduce the time spent in the comparison process by unifying the sizes of the pictures.
6. The method for identifying part targets in inspection images based on the generalized Hough transform according to claim 1, characterized in that in step D the method of selecting the maximal peak point in the Hough space comprises:
Step E1: selecting a point Xr = (xr, yr) in the edge image of the inspection figure and a point X = (x, y) on the boundary of the edge image, and computing the vector difference r between Xr and X: r = Xr - X;
Step E2: letting the angle between r and the x-axis be φ, the distance from point Xr to the boundary point X being |r|, and computing the boundary orientation θ at the boundary point X; the possible value range of θ is divided into k discrete states iΔθ, i = 1, 2, 3, ..., k, denoted θk = kΔθ, where Δθ is the angular step of the direction parameter defining θk;
Step E3: establishing the relationship look-up table R from the reference point positions determined for all boundary points; if every point (x, y) on a region boundary has the parameters (r, θk), then (r, θk) is the shape feature of that boundary curve;
Step E4: computing the accumulator value for each point on the region boundary and finding the maximum value r* in {A(xr, yr)}.
7. The method for identifying part targets in inspection images based on the generalized Hough transform according to claim 1, characterized in that in step E, when the reference coordinate is chosen in the inspection image, the corresponding mapping is the coordinate of the top-left corner point of the template image.
CN201811612262.8A 2018-12-27 2018-12-27 Patrol diagram part target identification method based on generalized Hough transformation Active CN109815822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811612262.8A CN109815822B (en) 2018-12-27 2018-12-27 Patrol diagram part target identification method based on generalized Hough transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811612262.8A CN109815822B (en) 2018-12-27 2018-12-27 Patrol diagram part target identification method based on generalized Hough transformation

Publications (2)

Publication Number Publication Date
CN109815822A (en) 2019-05-28
CN109815822B (en) 2024-05-28

Family

ID=66602538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811612262.8A Active CN109815822B (en) 2018-12-27 2018-12-27 Patrol diagram part target identification method based on generalized Hough transformation

Country Status (1)

Country Link
CN (1) CN109815822B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110412609A (en) * 2019-07-11 2019-11-05 郑州航空工业管理学院 A kind of multi-pulse laser radar target detection method
CN112435223A (en) * 2020-11-11 2021-03-02 马鞍山市瀚海云星科技有限责任公司 Target detection method, device and storage medium
CN113269767A (en) * 2021-06-07 2021-08-17 中电科机器人有限公司 Batch part feature detection method, system, medium and equipment based on machine vision
CN114549438A (en) * 2022-02-10 2022-05-27 浙江大华技术股份有限公司 Reaction kettle buckle detection method and related device
CN116051629A (en) * 2023-02-22 2023-05-02 常熟理工学院 Autonomous navigation robot-oriented high-precision visual positioning method
WO2023124233A1 (en) * 2021-12-31 2023-07-06 杭州海康机器人股份有限公司 Circle searching method and apparatus, electronic device and storage medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130022280A1 (en) * 2011-07-19 2013-01-24 Fuji Xerox Co., Ltd. Methods for improving image search in large-scale databases
CN103456005A (en) * 2013-08-01 2013-12-18 华中科技大学 Method for matching generalized Hough transform image based on local invariant geometrical characteristics
CN104778707A (en) * 2015-04-22 2015-07-15 福州大学 Electrolytic capacitor detecting method for improving general Hough transform
CN106683075A (en) * 2016-11-22 2017-05-17 广东工业大学 Power transmission line tower cross arm bolt defect detection method
CN108550165A (en) * 2018-03-18 2018-09-18 哈尔滨工程大学 A kind of image matching method based on local invariant feature

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIANG LI et al.: "Image matching under generalized Hough transform", IADIS International Conference on Applied Computing 2005, 31 December 2005 (2005-12-31) *
靳艳红: "Research on image edge detection based on an improved Canny algorithm", China Masters' Theses Full-text Database (Electronic Journals), Information Science and Technology Series, pages 18-32 *
颜孙溯: "Research on machine-vision-based detection algorithms for polarity-sensitive electronic components on PCB boards", pages 9-70 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110412609A (en) * 2019-07-11 2019-11-05 郑州航空工业管理学院 A kind of multi-pulse laser radar target detection method
CN112435223A (en) * 2020-11-11 2021-03-02 马鞍山市瀚海云星科技有限责任公司 Target detection method, device and storage medium
CN112435223B (en) * 2020-11-11 2021-11-23 马鞍山市瀚海云星科技有限责任公司 Target detection method, device and storage medium
CN113269767A (en) * 2021-06-07 2021-08-17 中电科机器人有限公司 Batch part feature detection method, system, medium and equipment based on machine vision
WO2023124233A1 (en) * 2021-12-31 2023-07-06 杭州海康机器人股份有限公司 Circle searching method and apparatus, electronic device and storage medium
CN114549438A (en) * 2022-02-10 2022-05-27 浙江大华技术股份有限公司 Reaction kettle buckle detection method and related device
CN116051629A (en) * 2023-02-22 2023-05-02 常熟理工学院 Autonomous navigation robot-oriented high-precision visual positioning method
CN116051629B (en) * 2023-02-22 2023-11-07 常熟理工学院 Autonomous navigation robot-oriented high-precision visual positioning method

Also Published As

Publication number Publication date
CN109815822B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN109816674A (en) Registration figure edge extracting method based on Canny operator
CN109815822A (en) Inspection figure components target identification method based on Generalized Hough Transform
WO2021138995A1 (en) Fully automatic detection method for checkerboard corners
CN109727239A (en) Based on SURF feature to the method for registering of inspection figure and reference map
CN108921865B (en) Anti-interference sub-pixel straight line fitting method
Shan et al. A stereovision-based crack width detection approach for concrete surface assessment
CN105447512B (en) A kind of detection method and device for the beauty defect that essence slightly combines
CN105913415B (en) A kind of image sub-pixel edge extracting method with extensive adaptability
Tuytelaars et al. Matching widely separated views based on affine invariant regions
CN107993258B (en) Image registration method and device
CN107025648A (en) A kind of board failure infrared image automatic testing method
JP6899189B2 (en) Systems and methods for efficiently scoring probes in images with a vision system
CN111738320B (en) Shielded workpiece identification method based on template matching
CN105023265A (en) Checkerboard angular point automatic detection method under fish-eye lens
WO2021000948A1 (en) Counterweight weight detection method and system, and acquisition method and system, and crane
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
CN108921858A (en) A kind of recognition methods of automatic detection lifting lug position
CN108966500A (en) The pcb board of view-based access control model tracking is secondary and multiple accurate drilling method
CN113538583A (en) Method for accurately positioning position of workpiece on machine tool and vision system
CN110807354B (en) Industrial assembly line product counting method
CN108269264B (en) Denoising and fractal method of bean kernel image
KR101284252B1 (en) Curvature Field-based Corner Detection
CN115100153A (en) Binocular matching-based in-pipe detection method and device, electronic equipment and medium
Rashwan et al. Towards multi-scale feature detection repeatable over intensity and depth images
Cai et al. Automatic curve selection for lens distortion correction using Hough transform energy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant