CN113295617A - Multi-target offset detection method without reference point - Google Patents


Info

Publication number
CN113295617A
Authority
CN
China
Prior art keywords
product
detection
target
detected
position parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110542447.1A
Other languages
Chinese (zh)
Other versions
CN113295617B (en)
Inventor
罗忠辉 (Luo Zhonghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huiju Intelligent Technology Co., Ltd.
Original Assignee
Guangzhou Huiju Intelligent Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huiju Intelligent Technology Co., Ltd.
Priority to CN202110542447.1A
Publication of CN113295617A
Application granted
Publication of CN113295617B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/01: Arrangements or apparatus for facilitating the optical investigation
    • G01N 2021/0106: General arrangement of respective parts
    • G01N 2021/0112: Apparatus in one mechanical, optical or electronic block
    • G01N 21/84: Systems specially adapted for particular applications
    • G01N 21/88: Investigating the presence of flaws or contamination
    • G01N 21/8851: Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8887: Scan or image signal processing based on image processing techniques

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

A multi-target offset detection method without reference points comprises the following steps. S1: take a qualified product as a sample, photograph the sample to obtain a template image, and obtain from the template image the position parameters of each detection target in the sample; these serve as the standard position parameters of detection targets meeting the detection requirements. S2: photograph the product to be inspected to obtain a product image, obtain from the product image the position parameters of each detection target in the product as the actual position parameters, and compare the actual position parameters with the standard position parameters; a product whose comparison result exceeds the threshold is judged a defective product failing the detection requirements, and a product whose comparison result does not exceed the threshold is judged a qualified product meeting them. The invention enables offset detection that does not depend on reference points when no visually and accurately identifiable reference point can be found on the product or the jig, thereby meeting the detection requirements.

Description

Multi-target offset detection method without reference point
Technical Field
The invention relates to the technical field of computer vision, and in particular to a multi-target offset detection method without a reference point.
Background
In computer vision applications, product defect detection occupies an important position: it can effectively improve product quality, reduce cost, and raise user satisfaction.
In industrial production, the absence and offset of product parts must often be detected. For an earphone charging case, for example, several white label papers on the charging interface must be checked by computer vision for missing labels (labels not applied) and offset labels (labels applied askew), and each label must be checked individually. Conventional visual inspection methods require two reference points whose positions are accurate and which can be stably detected by vision; all other detection points are then identified relative to these reference points.
When no reference point that can be accurately identified by vision can be found on the product or the jig, however, an offset detection method that does not depend on reference points is needed to meet the detection requirements.
Disclosure of Invention
In order to make up for the defects of the prior art, the invention provides a multi-target offset detection method without reference points, which performs offset detection on multiple detection targets without depending on reference points, for situations where no visually and accurately identifiable reference point can be found on the product or the jig.
In order to achieve the above object, the present invention adopts the following technical solutions.
A multi-target offset detection method without reference points comprises the following steps:
S1, taking a qualified product as a sample, photographing the sample to obtain a template image, and obtaining from the template image the position parameters of each detection target in the sample as the standard position parameters of detection targets meeting the detection requirements;
S2, photographing the product to be inspected to obtain a product image, obtaining from the product image the position parameters of each detection target in the product as the actual position parameters, and comparing the actual position parameters with the standard position parameters; a product whose comparison result exceeds the threshold is judged a defective product failing the detection requirements, and a product whose comparison result does not exceed the threshold is judged a qualified product meeting them.
The method applies to offset detection scenarios in which multiple targets have no accurate reference point; its key feature is that parts on a product can still be checked for offset without selecting any reference point. The method has two limitations: it requires multiple detection targets (it cannot be used with 3 or fewer), and the qualified targets must outnumber the unqualified ones among the detection targets; otherwise the expected effect cannot be achieved.
Compared with the prior art, the invention has the beneficial effects that:
For a product with more than 3 detection targets, when the qualification rate of the detection targets is higher than 50%, offset detection that does not depend on reference points can be performed even when no visually and accurately identifiable reference point can be found on the product or the jig, thereby meeting the detection requirements.
The present invention will be further described with reference to the drawings and the detailed description.
Drawings
FIG. 1 is a schematic diagram of one detection example of the present invention.
Detailed Description
A multi-target offset detection method without reference points comprises the following steps:
S1, taking a qualified product as a sample, photographing the sample to obtain a template image, and obtaining from the template image the position parameters of each detection target in the sample as the standard position parameters of detection targets meeting the detection requirements;
S2, photographing the product to be inspected to obtain a product image, obtaining from the product image the position parameters of each detection target in the product as the actual position parameters, and comparing the actual position parameters with the standard position parameters; a product whose comparison result exceeds the threshold is judged a defective product failing the detection requirements, and a product whose comparison result does not exceed the threshold is judged a qualified product meeting them.
Specifically, the position parameters of the detection target include coordinates and an angle.
Further, step S2 includes the following sub-steps:
S21, determining the optimal mapping relation from each detection target in the template image to each detection target in the product image;
S22, deriving the expected position parameters of each detection target in the product to be inspected from the standard position parameters via the optimal mapping relation;
S23, comparing the expected position parameters with the actual position parameters of each detection target in the product to be inspected; a product whose comparison result exceeds the threshold is judged a defective product failing the detection requirements, and a product whose comparison result does not exceed the threshold is judged a qualified product meeting them.
Preferably, when the product image is composed of a plurality of different fields of view, step S2 includes the following sub-step before S21:
S20, converting the coordinates of all detection targets in the different fields of view into a unified coordinate system.
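A minimal sketch of this coordinate unification, assuming each field of view is related to the unified system by a pure translation with a known origin (an assumption for illustration; a real multi-camera setup may also need scale and rotation calibration):

```python
def to_unified(view_origin, local_pt):
    """Map a target coordinate measured in one camera's field of view
    into the unified coordinate system, assuming the view is offset
    from the unified origin by a known translation (an illustrative
    assumption; a calibrated rig may need a full transform)."""
    return (view_origin[0] + local_pt[0], view_origin[1] + local_pt[1])
```

With this, every detection target's coordinates can be expressed in one frame before the mapping search of S21 begins.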
Preferably, the number of detection targets is N, and the optimal mapping relation from each detection target in the template image to each detection target in the product image is determined as follows:
S211, take the key points of the 1st detection target T1 in the product image and the corresponding key points on the matching detection target in the template image to form a set of mapping relations;
S212, compute the mapping Hi from the key points on the corresponding detection target in the template image to the key points on the 1st detection target T1 in the product image; the mapping Hi may use affine transformation, perspective transformation, or a translation-and-rotation matrix computed from two points, with the calculation method chosen according to actual needs;
S213, apply the mapping Hi to every other detection target j (j > 1, j <= N) in the template image to obtain expected coordinates Pj', which differ from the coordinates Pj of detection target j in the product image;
S214, use weighted evaluation to compute the positional accuracy score Scorei of the other detection targets in the product image under mapping Hi;
S215, take the 2nd to Nth detection targets in the product image in turn and repeat steps S211 to S214, obtaining N scores Score1 to ScoreN; take the maximum of Score1 to ScoreN as the best value Scorebest, and the corresponding Hi as the best mapping Hbest.
In the above embodiment, specifically:
(1) First, take the key points of the 1st actual target T1 (for example, the 4 corner points of the target's minimum enclosing rectangle) and the corresponding key points of the matching target in the template to form a set of mapping relations;
(2) compute the mapping Hi from the template key points to the key points of the actual target T1 (affine transformation, perspective transformation, or a translation-and-rotation matrix computed from two points can be used; the calculation method should be chosen according to actual needs);
wherein the equation actually solved is A·X = B, where A contains the target coordinates in the template, X is the mapping Hi to be solved, and B contains the key points of the actual target T1; because the positions of the 4 vertices of the template target and of the actual target are both known, 4 sets of equations are obtained, from which the parameters of X (namely Hi) are solved;
Taking perspective transformation as an example, each template vertex (x, y) and its corresponding actual vertex (u, v) satisfy

[u*w, v*w, w]^T = H * [x, y, 1]^T, with H = [[h11, h12, h13], [h21, h22, h23], [h31, h32, 1]],

equivalently u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1) and v = (h21*x + h22*y + h23) / (h31*x + h32*y + 1); the 4 vertex pairs thus yield 8 linear equations in the 8 unknown entries, from which the mapping matrix H is solved.
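The vertex-pair solve described above can be sketched with NumPy as follows; this is an illustrative implementation, not the patent's own code, and the corner coordinates in the usage example are hypothetical.

```python
import numpy as np

def solve_perspective(src_pts, dst_pts):
    """Solve the 8-parameter perspective mapping H (3x3, bottom-right
    entry fixed to 1) that maps the 4 template keypoints src_pts onto
    the 4 product-image keypoints dst_pts, i.e. the linear system
    A.X = B built from the 4 vertex correspondences."""
    A, B = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Two linear equations per point pair, from u and v.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        B.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(B, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Apply a 3x3 perspective mapping to one 2-D point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

For instance, if the four template corners are mapped onto corners shifted by a constant offset, the recovered H is a pure translation matrix. In practice an equivalent library routine (for example OpenCV's getPerspectiveTransform) would typically be used instead of a hand-rolled solve.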
(3) Applying the obtained Hi to every other template target j (j > 1 and j <= N) gives expected target point coordinates Pj'; Pj' will differ from the coordinates Pj of target j in the actual running image, namely:

diffj = Pj' - Pj
(4) Use weighted evaluation to score the positional accuracy of the other targets under the mapping Hi:

Scorei = average over the other targets j of [ Ws * (1 - ||Centerj' - Centerj|| / D) + Wr * (1 - |anglej' - anglej| / A) ]

Here Pj' is the overall coordinate set of the predicted target and can be understood as a rectangle: Centerj' is the center point of Pj' and anglej' is the overall rotation angle of Pj'; similarly, Centerj and anglej are the center point and overall rotation angle of the actually detected target srcj. D is an offset normalization parameter, which can be set according to the maximum offset that may occur in practice, and A is a rotation-angle normalization parameter, which can be taken as 180 degrees.
Ws is the center-offset weight and Wr the rotation-angle weight; in practical applications, Ws and Wr are each preferably 0.5. Centerj' and anglej' are the center coordinates and angle of target j predicted using Hi, while Centerj and anglej are the center coordinates and angle of the actual target j detected from the running image.
(5) Take the 2nd to Nth targets and repeat steps (1) to (4), computing N scores (Score1, Score2, ..., ScoreN); take Scorebest = max(Scorei) and the corresponding Hi as Hbest.
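The selection in step (5) can be sketched as follows; the representation of candidates as (H_i, Score_i) pairs is an assumed data layout, not taken from the patent.

```python
def best_mapping(candidates):
    """Select Score_best and H_best from the per-candidate results of
    steps (1)-(4): `candidates` is a list of (H_i, Score_i) pairs, one
    per target used to build a trial mapping."""
    H_best, score_best = max(candidates, key=lambda c: c[1])
    return H_best, score_best
```

With the scores of the Fig. 1 example, best_mapping([("H1", 0.8), ("H2", 0.75), ("H3", 0.98), ("H4", 0.91)]) selects H3 with score 0.98.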
(6) Compute the offset and rotation angle of each target:
Using Hbest, recompute the expected coordinates Pj' of all target points and compare them with the target coordinates Pj in the actual running image to obtain the offset and the rotation angle; if the offset distance or the rotation angle exceeds the value set in the template program, the product is judged unqualified and a prompt or alarm is given in the interface.
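Step (6) can be sketched as follows; the (cx, cy, angle) tuple layout and the 5-pixel and 3-degree limits are illustrative assumptions standing in for the values set in the template program.

```python
import math

def judge_targets(expected, actual, max_offset=5.0, max_rotation=3.0):
    """Final judgement: compare each target's expected position under
    H_best with its detected position, both given as (cx, cy, angle)
    tuples, and mark each target OK or NG against the offset and
    rotation limits (assumed values for illustration)."""
    results = []
    for (ex, ey, ea), (ax, ay, aa) in zip(expected, actual):
        offset = math.hypot(ex - ax, ey - ay)
        rotation = abs(ea - aa)
        ok = offset <= max_offset and rotation <= max_rotation
        results.append("OK" if ok else "NG")
    return results
```

Any NG entry would then trigger the interface prompt or alarm described above.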
Fig. 1 shows a detection example of the present invention with 4 detection targets: target 1, target 2, target 3 and target 4. A qualified product is taken as a sample and photographed to obtain template image 1, and the position parameters (coordinates and angles) of targets 1 to 4 in the sample are obtained from template image 1 as the standard position parameters of detection targets meeting the detection requirements. The product to be inspected is photographed to obtain product image 2, and the position parameters of targets 1', 2', 3' and 4' in the product are obtained from it as the actual position parameters. In product image 2, the actual position parameters of targets 1' to 4' are drawn with solid boxes and their expected position parameters with dashed boxes.
In the detection example shown in Fig. 1, the mappings from the key points of targets 1, 2, 3 and 4 in template image 1 to the key points of targets 1', 2', 3' and 4' in product image 2 are H1, H2, H3 and H4 respectively. Using H1 to H4, the scores are computed separately:
Score1=0.8;
Score2=0.75;
Score3=0.98;
Score4=0.91;
Thus Scorebest = Score3 = 0.98, and H3 is judged to be Hbest.
The actual position parameters of targets 1', 2', 3' and 4' are compared with the expected position parameters of each target computed using H3 to judge whether the position and angle deviations exceed the limits, giving:
target 1: NG;
target 2: NG;
target 3: OK;
target 4: OK.
It will be clear to a person skilled in the art that the scope of protection of the present invention is not limited to the details of the foregoing illustrative embodiments; all changes which come within the meaning and range of equivalency of the claims are intended to be embraced therein.

Claims (5)

1. A multi-target offset detection method without reference points is characterized by comprising the following steps:
S1, taking a qualified product as a sample, photographing the sample to obtain a template image, and obtaining from the template image the position parameters of each detection target in the sample as the standard position parameters of detection targets meeting the detection requirements;
S2, photographing the product to be inspected to obtain a product image, obtaining from the product image the position parameters of each detection target in the product as the actual position parameters, and comparing the actual position parameters with the standard position parameters; a product whose comparison result exceeds the threshold is judged a defective product failing the detection requirements, and a product whose comparison result does not exceed the threshold is judged a qualified product meeting them.
2. The method of claim 1, wherein the position parameters of the targets include coordinates and angles.
3. The multi-target offset detection method without reference points according to claim 1 or 2, characterized in that step S2 includes the following sub-steps:
s21, determining the optimal mapping relation from each detection target in the template image to each detection target in the product image;
s22, according to the standard position parameters, the expected position parameters of each detection target in the product to be detected are deduced according to the optimal mapping relation;
S23, comparing the expected position parameters with the actual position parameters of each detection target in the product to be inspected; a product whose comparison result exceeds the threshold is judged a defective product failing the detection requirements, and a product whose comparison result does not exceed the threshold is judged a qualified product meeting them.
4. The method of claim 3, wherein, when the product image is composed of a plurality of different fields of view, step S2 includes the following sub-step before S21:
S20, converting the coordinates of all detection targets in the different fields of view into a unified coordinate system.
5. The method as claimed in claim 3, wherein the number of detection targets is N, and the optimal mapping relation from each detection target in the template image to each detection target in the product image is determined as follows:
S211, taking the key points of the 1st detection target T1 in the product image and the corresponding key points on the matching detection target in the template image to form a set of mapping relations;
S212, obtaining the mapping Hi from the key points on the corresponding detection target in the template image to the key points on the 1st detection target T1 in the product image;
S213, applying the mapping Hi to every other detection target j (j > 1, j <= N) in the template image to obtain expected coordinates Pj', which differ from the coordinates Pj of detection target j in the product image;
S214, using weighted evaluation to compute the positional accuracy score Scorei of the other detection targets in the product image under mapping Hi;
S215, taking the 2nd to Nth detection targets in the product image in turn and repeating steps S211 to S214, thereby obtaining N scores Score1 to ScoreN; taking the maximum of Score1 to ScoreN as the best value Scorebest and the corresponding Hi as the best mapping Hbest.
CN202110542447.1A, filed 2021-05-18: Multi-target offset detection method without reference point, granted as CN113295617B (Active)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110542447.1A CN113295617B (en) 2021-05-18 2021-05-18 Multi-target offset detection method without reference point


Publications (2)

Publication Number / Publication Date
CN113295617A: 2021-08-24
CN113295617B: 2022-11-25

Family

ID=77322666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110542447.1A Active CN113295617B (en) 2021-05-18 2021-05-18 Multi-target offset detection method without reference point

Country Status (1)

Country Link
CN (1) CN113295617B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036351A (en) * 2023-10-09 2023-11-10 合肥安迅精密技术有限公司 Element defect detection method and system and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010243597A (en) * 2009-04-01 2010-10-28 Sony Corp Device and method for presenting biological image, program, and biological image presentation system
CN108469438A (en) * 2018-03-20 2018-08-31 东莞市美盈森环保科技有限公司 A kind of printed matter detection method, device, equipment and storage medium
CN110596121A (en) * 2019-09-12 2019-12-20 南京旷云科技有限公司 Keyboard appearance detection method and device and electronic system
CN111208147A (en) * 2020-01-13 2020-05-29 普联技术有限公司 Stitch detection method, device and system
CN111474184A (en) * 2020-04-17 2020-07-31 河海大学常州校区 AOI character defect detection method and device based on industrial machine vision


Also Published As

Publication number Publication date
CN113295617B (en) 2022-11-25


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information
Address after: Room 901, Building 4, No. 188 Kaiyuan Avenue, Huangpu District, Guangzhou City, Guangdong Province, 510000
Applicant after: Guangzhou Huiju Intelligent Technology Co., Ltd.
Address before: 510000 303, No. 18, Nanpu Road, Huangpu District, Guangzhou City, Guangdong Province (office only)
Applicant before: Guangzhou Huiju Intelligent Technology Co., Ltd.
GR01: Patent grant