CN114155227B - Flexible product size detection method, device and system - Google Patents


Info

Publication number
CN114155227B
CN114155227B (application CN202111483710.0A)
Authority
CN
China
Prior art keywords
camera
image
target
determining
detection object
Prior art date
Legal status
Active
Application number
CN202111483710.0A
Other languages
Chinese (zh)
Other versions
CN114155227A (en)
Inventor
薛皓
Current Assignee
Suzhou Jiaqishi Technology Co., Ltd.
Original Assignee
Suzhou Jiaqishi Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Suzhou Jiaqishi Technology Co., Ltd.
Priority claimed from application CN202111483710.0A
Publication of CN114155227A
Application granted
Publication of CN114155227B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the technical field of visual inspection, in particular to a method, a device and a system for detecting the size of a flexible product. The detection method comprises the following steps: a first camera collects a first image and a second camera collects a second image, the two cameras being arranged opposite each other, so that the first camera can collect an image of the top of the target detection object and the second camera can collect an image of the bottom of the target detection object. Data points on the inner wall of the detection hole of the target detection object can therefore be accurately obtained based on the first image, data points on the end of the target detection object can be obtained based on the second image, and the first distance between the inner wall of the detection hole and the end can be determined. The detection method provided by the application has the characteristic of high detection accuracy.

Description

Flexible product size detection method, device and system
Technical Field
The invention relates to the technical field of visual inspection, in particular to a method, a device and a system for detecting the size of a flexible product.
Background
During product manufacturing, the performance or dimensions of a product generally need to be inspected to ensure its quality. The inspection process plays an important role in controlling product quality and guides subsequent product improvement.
For flexible products used as components, such as watchbands, the product must be assembled with other parts, so dimensional control of important structures such as assembly holes is very important. In the prior art, however, dimensional measurement of flexible products is performed manually: the measuring equipment is a vision device with only a single camera, and an operator must place the product on the stage of the vision device before the relevant hole dimensions can be measured, which results in the defect of low detection precision.
Disclosure of Invention
The invention aims to solve the technical problem of poor detection accuracy for dimensions inside a product.
In order to solve the technical problems, the application discloses a method for detecting the size of a flexible product, which comprises the following steps:
acquiring a first image with a first camera; the first image comprises the top of the target detection object; the target detection object comprises a detection hole; the first camera is close to the top of the target detection object;
acquiring a second image with a second camera; the second image comprises the bottom of the target detection object; the second camera is close to the bottom of the target detection object;
and determining a first distance from the detection hole to the end part of the target detection object according to the first image and the second image.
Optionally, the determining the first distance between the detection hole and the end of the target detection object according to the first image and the second image includes:
determining a first dataset of the inner wall of the detection aperture based on the first image; the first data set includes pixel coordinates of each of a plurality of first target points; each first target point belongs to the inner wall of the detection hole;
determining a second dataset for the end based on the second image; the second data set includes pixel coordinates of each of a plurality of second target points; each second target point belongs to the end part;
determining a first straight line according to the first data set;
determining a second straight line according to the second data set;
determining the relative distance between the first straight line and the second straight line; the relative distance is determined as a first distance of the detection hole from the end.
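The step sequence above (fit a line to each data set, then take their relative distance as the first distance) can be sketched in pure Python. This is a minimal illustration under our own assumptions — ordinary least squares and near-parallel lines — not the patented implementation, and the function names are ours:

```python
import math

def fit_line(points):
    """Least-squares fit of y = a*x + b through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def first_distance(first_dataset, second_dataset):
    """Relative distance between the two fitted, near-parallel lines."""
    a1, b1 = fit_line(first_dataset)
    a2, b2 = fit_line(second_dataset)
    a = (a1 + a2) / 2.0  # average slope, since the lines are assumed near-parallel
    return abs(b2 - b1) / math.sqrt(1.0 + a * a)
```

For two horizontal point sets at y = 1 and y = 4, `first_distance` returns 3 in the units of the unified coordinate system.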
Optionally, the determining the first data set of the inner wall of the detection hole based on the first image includes:
gray processing is carried out on the first image to obtain a first target bright area and a first target dark area; the gray value of the first target bright area is larger than or equal to a first preset gray value; the gray value of the first target dark area is smaller than the first preset gray value;
determining a first target area located between the first target bright area and the first target dark area;
and determining a plurality of first target points of the first target area to obtain the first data set.
Optionally, the determining the second data set of the end portion based on the second image includes:
gray processing is carried out on the second image to obtain a second target bright area and a second target dark area; the gray value of the second target bright area is larger than or equal to a second preset gray value; the gray value of the second target dark area is smaller than the second preset gray value; the second preset gray value is larger than or equal to the first preset gray value;
determining a second target region located between the second target bright region and the second target dark region;
and determining a plurality of second target points of the second target area to obtain the second data set.
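As an illustration of the segmentation steps above (both branches follow the same pattern), the sketch below thresholds a small grayscale image and collects, per column, a pixel on the boundary between a bright area and a dark area — the target area from which target points are taken. The convention that "bright" means a gray value at or above the preset value, and all names, are our assumptions:

```python
def boundary_points(gray_rows, preset):
    """Per column, return the (x, y) pixel where a bright run
    (gray >= preset) meets a dark run (gray < preset), scanning top-down."""
    points = []
    height, width = len(gray_rows), len(gray_rows[0])
    for x in range(width):
        for y in range(height - 1):
            bright_here = gray_rows[y][x] >= preset
            bright_next = gray_rows[y + 1][x] >= preset
            if bright_here != bright_next:
                points.append((x, y))  # last row of the upper region
                break
    return points
```

The resulting (x, y) pairs are the pixel coordinates that would populate a data set for line fitting.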
Optionally, the axis of the first camera is coincident with or parallel to the axis of the second camera;
after determining the first distance between the detection hole and the end of the target detection object according to the first image and the second image, the method further comprises:
acquiring a third image by using a third camera; the third image comprises the top of the target detection object; the target detection object comprises a detection hole; a preset interval exists between the third camera and the first camera;
acquiring a fourth image by using a fourth camera; the fourth image comprises the bottom of the target detection object; the third camera and the fourth camera are arranged oppositely; the axis of the third camera and the axis of the fourth camera form a preset included angle;
determining a second distance from the detection hole to the end part of the target detection object according to the third image and the fourth image;
determining a comparison value according to the first distance and the second distance;
if the comparison value is smaller than or equal to a preset threshold value, determining that the size detection result of the target detection object is qualified; otherwise, determining that the size detection result is unqualified.
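The qualification check above can be sketched as follows; the text does not fix the comparison formula, so taking the comparison value as the absolute difference of the two distances is our assumption:

```python
def size_qualified(first_distance, second_distance, preset_threshold):
    """Qualified when the comparison value (assumed here to be the absolute
    difference of the two measured distances) is within the preset threshold."""
    comparison_value = abs(first_distance - second_distance)
    return comparison_value <= preset_threshold
```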
Optionally, a metal piece is arranged in the detection hole; the metal piece is connected with the detection hole through adhesive; the adhesive is positioned on the inner wall of the metal piece;
the axis of the first camera and the axis of the second camera form a preset included angle;
after determining the first distance between the detection hole and the end of the target detection object according to the first image and the second image, the method further comprises:
determining the first distance as a target distance value;
if the target distance value meets a preset threshold range, determining that the size detection result of the target detection object is qualified; otherwise, determining that the size detection result is unqualified.
In another aspect, the application also discloses a device for detecting the size of a flexible product, which includes:
the first acquisition module is used for acquiring a first image by using a first camera; the first image comprises the top of the target detection object; the target detection object comprises a detection hole; the first camera is close to the top of the target detection object;
the second acquisition module is used for acquiring a second image by using a second camera; the second image comprises the bottom of the target detection object; the second camera is close to the bottom of the target detection object;
and the determining module is used for determining a first distance between the detection hole and the end part of the target detection object according to the first image and the second image.
In another aspect, the application also discloses a system for detecting the size of a flexible product, which comprises a camera assembly and a processing unit;
the camera assembly includes a first camera and a second camera; the first camera is close to the top of the target detection object; the second camera is close to the bottom of the target detection object;
the first camera is used for acquiring a first image and sending the first image to the processing unit; the first image comprises the top of the target detection object; the target detection object comprises a detection hole;
The second camera is used for acquiring a second image and sending the second image to the processing unit; the second image comprises the bottom of the target detection object; the first camera and the second camera are oppositely arranged;
the processing unit is electrically connected with the first camera and the second camera respectively; the processing unit is used for determining a first distance between the detection hole and the end part of the target detection object according to the first image and the second image.
In another aspect, the application also discloses a computer device, which includes a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored in the memory and is loaded and executed by the processor to implement the above detection method.
In another aspect, the present application also discloses a computer storage medium, where at least one instruction or at least one program is stored, where the at least one instruction or at least one program is loaded and executed by a processor to implement the above detection method.
By adopting the technical scheme, the detection method for the size of the flexible product has the following beneficial effects:
A first camera collects a first image and a second camera collects a second image, the two cameras being arranged opposite each other, so that the first camera can collect an image of the top of the target detection object and the second camera can collect an image of the bottom of the target detection object. Data points on the inner wall of the detection hole of the target detection object can therefore be accurately obtained based on the first image, data points on the end of the target detection object can be obtained based on the second image, and the first distance between the inner wall of the detection hole and the end can be determined. In the prior art, the size can be detected based on only one camera, and the target position cannot be clearly detected from a single image, so the detection accuracy is poor.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an alternative application scenario diagram of the present application;
FIG. 2 is a flow chart of a first alternative detection method according to the present application;
FIG. 3 is a schematic view of the structure of an alternative flexible product of the present application;
FIG. 4 is a schematic view of a partial structure of an alternative flexible product of the present application;
FIG. 5 is a gray scale pictorial representation of an alternative first image of the present application;
FIG. 6 is a schematic view of a first alternative camera assembly of the present application;
FIG. 7 is a gray scale pictorial representation of an alternative second image of the present application;
FIG. 8 is a schematic view of an alternative calibration plate configuration of the present application;
FIG. 9a is a first schematic view of the positional relationship between an alternative metal member and the adhesive of the present application;
FIG. 9b is a second schematic view of the positional relationship between an alternative metal member and the adhesive of the present application;
FIG. 10 is a flow chart of a second alternative detection method of the present application;
FIG. 11 is a flow chart of a third alternative detection method according to the present application;
FIG. 12 is a schematic view of a second alternative camera assembly of the present application;
FIG. 13 is a gray scale pictorial view of another alternative first image of the present application;
FIG. 14 is a gray scale pictorial view of another alternative second image of the present application;
FIG. 15 is a flow chart of a fourth alternative detection method of the present application;
FIG. 16 is a schematic structural view of a third alternative camera assembly of the present application;
FIG. 17 is a schematic view of an alternative flexible product size detection device;
fig. 18 is a block diagram of the hardware architecture of a server for an alternative detection method of the present application.
The following supplementary explanation is given to the accompanying drawings:
1 - camera assembly; 11 - first camera; 12 - second camera; 13 - fixing structure; 14 - first camera assembly; 15 - second camera assembly; 151 - third camera; 152 - fourth camera; 2 - processing unit; 3 - connection area; 31 - adhesive; 4 - detection hole; 5 - metal piece; 6 - first light source generator; 7 - second light source generator.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments of the present application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, fig. 1 is an alternative application scenario diagram of the present application. The scene comprises a camera assembly 1 and a processing unit 2. The camera assembly 1 comprises a first camera 11 and a second camera 12 whose lenses face each other; that is, the first camera is close to the top of the target detection object and the second camera is close to its bottom, and the product is located in a preset area between the first camera 11 and the second camera 12. The first camera 11 is used to acquire a first image and send it to the processing unit 2; the first image comprises the top of the target detection object, and the target detection object includes a detection hole 4. The second camera 12 is used to acquire a second image and send it to the processing unit 2; the second image comprises the bottom of the target detection object. The processing unit 2 is electrically connected to the first camera 11 and the second camera 12, respectively, and is configured to determine a first distance between the detection hole 4 and the end of the target detection object based on the first image and the second image. Accurate measurement of the first distance of the target detection object can thereby be realized.
Alternatively, the processing unit 2 may be a server or a terminal.
Optionally, the terminal may be a desktop computer, a notebook computer, a mobile phone, a tablet computer, etc.
The terminal may include a display, a storage device, and a processor connected by a data bus. The display is used to show a virtual image of the equipment to be monitored and the connection relations among its sub-devices, and may be the touch screen of a mobile phone or tablet computer. The storage device stores the program code, data, and the like of the capture device, and may be the terminal's internal memory or a storage card such as a smart media card, secure digital card, or flash card. The processor may be a single-core or multi-core processor.
A specific embodiment of the detection method of the present application is described below; fig. 2 is a schematic flow chart of a first alternative detection method of the present application. The present specification provides method steps as in the embodiments or flowcharts, but more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one possible execution order and does not represent the only one; when implemented in a real system or server product, the methods illustrated in the embodiments or figures may be executed sequentially or in parallel (e.g., in a parallel-processor or multithreaded environment). As shown in fig. 2, the method may include:
S201: acquiring a first image with a first camera 11; the first image comprises the top of the target detection object; the target detection object includes a detection hole 4.
Alternatively, the target detection object may be part of a flexible product, for example, the connection area 3 described below. The flexible product may be a watchband with the structure shown in fig. 3: the watchband body is made of rubber, and one end of the watchband is a connection area 3 for connecting to a dial. In general, to ensure reliable and convenient connection, a material with a certain hardness, such as metal, is arranged in the connection area 3. The connection area 3 of the watchband provided in the embodiment of the present application includes a detection hole 4, in which a metal piece 5 is arranged; the metal piece 5 is used for connecting to the dial and is fixed in the detection hole 4 by adhesive. As shown in fig. 4, the straight line corresponding to the lower side wall of the detection hole 4 is denoted L1 and the straight line corresponding to the end of the watchband is denoted L2; when the distance between the two meets a preset distance condition, the size of the watchband is determined to be acceptable, and otherwise it is determined to be unacceptable.
Referring to fig. 5, fig. 5 is a gray scale pictorial view of an alternative first image of the present application. L1 may then be determined based on the gray scale map of the first image.
S202: acquiring a second image with a second camera 12; the second image comprises the bottom of the target detection object; the first camera 11 and the second camera 12 are disposed opposite to each other.
Alternatively, referring to fig. 6, fig. 6 is a schematic structural diagram of a first alternative camera assembly of the present application. The camera assembly 1 further includes a fixing structure 13 on which both the first camera 11 and the second camera 12 are mounted; the lens of the first camera 11 faces the lens of the second camera 12, and a first preset distance for placing the target detection object exists between them. Two light source generators arranged at an interval, a first light source generator 6 and a second light source generator 7, are provided below the first camera 11 and are connected to the fixing structure 13. The first light source generator 6 has a ring structure, and the through hole in its middle is aligned with the lens of the first camera 11, so that light can pass through the through hole and illuminate the target detection object. The second light source generator 7 is set at a certain included angle to the fixing structure 13; that is, its emitted light strikes the target detection object at an inclined angle, so that the first camera 11 and the second camera 12 can acquire a first image and a second image with brightness variation, enabling the detection of lines.
Referring to fig. 7, fig. 7 is a gray scale pictorial view of an alternative second image of the present application. L2 may then be determined based on the gray scale map of the second image.
S203: and determining a first distance between the detection hole 4 and the end part of the target detection object according to the first image and the second image.
Optionally, in order to simplify the measurement process, the first image and the second image do not need to be fused to detect the size of the target detection object. Before the first camera 11 and the second camera 12 are used for acquisition, they must be calibrated so that their coordinates are unified. In a possible embodiment, the two cameras may be calibrated with a calibration board on which coordinate values are marked; referring to fig. 8, fig. 8 is a schematic structural diagram of an alternative calibration board of the present application. The calibration board includes a plurality of reference points and may be a checkerboard. The first camera 11 photographs the calibration board to obtain an image containing the reference points; the coordinates of any one target reference point among the reference points are acquired; the relative positional relationship between the target reference point and the origin of the pixel coordinate system of the first camera 11 is acquired; and a first mapping relation between the pixel coordinates of the first camera 11 and the world coordinates of the calibration board is determined from that relationship. The calibration process of the second camera 12 is the same as that of the first camera 11, and finally a second mapping relation between the pixel coordinates of the second camera 12 and the world coordinates of the calibration board can be determined. The pixel coordinates of target points in the images acquired by the first camera 11 and the second camera 12 are thus converted into world coordinates; that is, the coordinates of target points in the images acquired by the two cameras are unified, which enables linked processing of the related data in the two images.
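As a heavily simplified sketch of the mapping determined by this calibration, the snippet below builds a per-axis linear map from pixel coordinates to calibration-board world coordinates using two reference points. Real calibration also handles rotation and lens distortion; the model and names here are our assumptions:

```python
def pixel_to_world_map(ref_pixels, ref_worlds):
    """Build world = s * pixel + t per axis from two reference points
    (simplified: axis-aligned scale and offset only, no rotation)."""
    (px0, py0), (px1, py1) = ref_pixels
    (wx0, wy0), (wx1, wy1) = ref_worlds
    sx = (wx1 - wx0) / (px1 - px0)
    sy = (wy1 - wy0) / (py1 - py0)
    tx = wx0 - sx * px0
    ty = wy0 - sy * py0
    def to_world(pixel):
        return (sx * pixel[0] + tx, sy * pixel[1] + ty)
    return to_world
```

A map built separately for each camera converts that camera's target-point pixel coordinates into the shared board coordinates, which is what unifies the data from the two images.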
Referring to fig. 9a and 9b, fig. 9a is a first schematic diagram of the positional relationship between an alternative metal member and the adhesive of the present application, and fig. 9b is a second such schematic diagram. Since the metal piece 5 is connected to the inner wall of the detection hole 4 by the adhesive 31, and the metal piece 5 has a certain thickness, two cases can arise during processing: in the first, the adhesive 31 of the target detection object is located on top of the metal piece 5 (as in fig. 9a); in the second, the adhesive 31 is located on the inner wall of the metal piece 5 (as in fig. 9b). For the first case, the target detection object may be detected with the camera assembly 1 shown in fig. 6; as shown in fig. 10, fig. 10 is a flow chart of a second alternative detection method of the present application.
Step S203 may be specifically described as:
s2031: determining a first dataset of the inner wall of the detection aperture 4 based on the first image; the first data set includes pixel coordinates of each of a plurality of first target points; each first target point belongs to the inner wall of the detection aperture 4.
In one possible embodiment, referring to fig. 11, fig. 11 is a flow chart of a third alternative detection method of the present application. Step S2031 may be specifically illustrated as:
S1101: gray processing is carried out on the first image to obtain a first target bright area and a first target dark area; the gray value of the first target bright area is smaller than or equal to a first preset gray value; the gray value of the first target dark area is larger than the first preset gray value.
In this embodiment of the present application, when the first camera 11 starts to collect the first image, the first light source generator 6 and the second light source generator 7 are both turned on. Owing to the oblique light of the second light source generator 7, a certain amount of light is cast on the inner wall side of the detection hole 4; as can be seen from fig. 5 above, a brighter area exists on the inner wall of the detection hole 4, and this bright area is the first target bright area. The area above the first target bright area (i.e., in the positive y-axis direction) is the first target dark area.
In this embodiment, the first image may be scanned directly from dark to bright, from bottom to top (i.e., along the positive y-axis direction); the second dark area identified is taken as the first target dark area, and the first bright area identified is taken as the first target bright area, thereby determining the first target bright area and the first target dark area.
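For a single image column, the bottom-up run identification described above might look like the following sketch (the "bright means gray value at or above the preset value" convention and all names are our assumptions):

```python
def scan_column(column, preset):
    """Scan one column bottom-up (last row first), grouping pixels into
    bright/dark runs; return the first bright run and the second dark run,
    each as [label, start_row, end_row] in scan order."""
    runs = []
    for y in range(len(column) - 1, -1, -1):
        label = 'bright' if column[y] >= preset else 'dark'
        if runs and runs[-1][0] == label:
            runs[-1][2] = y  # extend the current run upward
        else:
            runs.append([label, y, y])
    bright_runs = [r for r in runs if r[0] == 'bright']
    dark_runs = [r for r in runs if r[0] == 'dark']
    first_bright = bright_runs[0] if bright_runs else None
    second_dark = dark_runs[1] if len(dark_runs) > 1 else None
    return first_bright, second_dark
```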
S1102: a first target region of the first target bright region and the first target dark region is determined.
S1103: and determining a plurality of first target points of the first target area to obtain the first data set.
S2032: determining a second dataset for the end based on the second image; the second data set includes pixel coordinates of each of a plurality of second target points; each second target point belongs to the end.
In one possible embodiment, step S2032 may be specifically described as:
gray processing is carried out on the second image to obtain a second target bright area and a second target dark area, where the gray value of the second target bright area is larger than or equal to a second preset gray value, the gray value of the second target dark area is smaller than the second preset gray value, and the second preset gray value is larger than or equal to the first preset gray value; determining a second target area located between the second target bright area and the second target dark area; and determining a plurality of second target points of the second target area to obtain the second data set.
In the embodiment of the present application, when the second camera 12 starts to acquire the second image, the first light source generator 6 is turned on and the second light source generator 7 is turned off, so that the second camera 12 acquires a backlight image. As can be seen from fig. 7 above, the area that is not blocked by the target detection object is a bright area, and this bright area is the above-mentioned second target bright area; the area below the second target bright area (i.e. in the negative y-axis direction) is the second target dark area.
In this embodiment, the second image may be identified directly from top to bottom (i.e. along the negative y-axis direction) in a light-to-dark manner: the first dark area identified is the second target dark area, and the first bright area identified is the second target bright area, so that the second target bright area and the second target dark area are determined.
S2033: a first line is determined from the first dataset.
In this embodiment, the VisionPro software may be used to capture a plurality of points existing in the first target area and screen them; for example, 60 points are taken and 20% of them are filtered out as invalid points, so that a first straight line is formed by straight-line fitting based on the remaining 48 points.
S2034: a second line is determined from the second dataset.
In this embodiment, similarly, 60 points are taken and 20% of invalid points are filtered in the same manner as the processing method of the first data set, so that a second straight line is formed by performing straight line fitting based on the remaining 48 points.
Each of the above points has position coordinate information. The position coordinates may be world coordinates that have already been converted via the calibration plate, or they may be the pixel coordinates of the camera, in which case the pixel coordinates of the points belonging to the first straight line or the second straight line are separately converted into world coordinates. The number of points above is merely an example; it may be 10, 20, 30, etc., according to actual needs, and the present invention is not limited thereto.
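The point sampling, invalid-point filtering, and straight-line fitting of steps S2033 and S2034 can be sketched as follows. This is a hedged sketch: the patent itself uses the VisionPro software, so the ordinary least-squares fit and the distance-from-initial-fit criterion for discarding invalid points here are assumptions, and all names are illustrative:

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b


def fit_line_filtered(points, drop_ratio=0.2):
    """Fit a line after discarding `drop_ratio` of the points that lie
    farthest from an initial fit (a stand-in for the 'invalid points'
    of the text; the exact screening criterion is an assumption)."""
    a0, b0 = fit_line(points)
    by_residual = sorted(points, key=lambda p: abs(a0 * p[0] + b0 - p[1]))
    kept = by_residual[: int(len(points) * (1 - drop_ratio))]
    return fit_line(kept), kept


# 60 sample points: 48 on the line y = 0.5*x + 2, plus 12 'invalid'
# points offset far above it.
points = [(float(x), 0.5 * x + 2) for x in range(48)]
points += [(float(x), 0.5 * x + 52) for x in range(0, 48, 4)]
(a, b), kept = fit_line_filtered(points)
print(len(kept), round(a, 6), round(b, 6))  # 48 0.5 2.0
```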
S2035: determining the relative distance between the first straight line and the second straight line; the relative distance is determined as the first distance of the detection aperture 4 from the end.
Alternatively, referring to fig. 5 and 7, the midpoint M1 of the first straight line L1 may be taken and a perpendicular drawn from M1 to the second straight line L2; since the coordinates of M1 and the coordinates of all points on the second straight line are known, the first distance d1 can be determined. To improve the flexibility of the detection method, the midpoint M2 of the second straight line L2 may instead be taken and a perpendicular drawn from M2 to the first straight line L1 to determine the first distance d1.
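The midpoint-and-perpendicular construction above amounts to a point-to-line distance computation, which can be sketched as follows (lines are assumed to be expressed in slope-intercept form y = a*x + b; all names and values are illustrative):

```python
import math


def midpoint(p, q):
    """Midpoint of the segment from p to q."""
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)


def point_line_distance(point, a, b):
    """Perpendicular distance from `point` to the line y = a*x + b,
    i.e. to the line a*x - y + b = 0."""
    x0, y0 = point
    return abs(a * x0 - y0 + b) / math.hypot(a, -1.0)


# Illustrative values: M1 is the midpoint of the fitted first straight
# line L1; L2 is a horizontal second straight line y = 2.
m1 = midpoint((0.0, 5.0), (10.0, 5.0))   # M1 = (5.0, 5.0)
d1 = point_line_distance(m1, a=0.0, b=2.0)
print(d1)  # 3.0
```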
As can be seen from figs. 9a and 9b above, there are two cases involving the metal piece 5 and the adhesive 31. For the first case, accurate measurement of the first distance can be achieved directly with the structure of the camera assembly 1 shown in fig. 6, that is, with the axis of the first camera 11 coincident with or parallel to the axis of the second camera 12. In the second case, however, since the adhesive 31 is present on the inner wall of the metal piece 5, the inner wall of the metal piece 5 cannot be detected accurately even with oblique light. Therefore, referring to fig. 12, fig. 12 is a schematic structural view of a second alternative camera assembly 1 of the present application. The axis of the first camera 11 and the axis of the second camera 12 form a preset included angle; the axis of the first camera 11 is at a preset distance from the fixed structure 13, and the axis of the second camera 12 is parallel to the fixed structure 13. The first light source generator 6 is arranged on the first camera 11; the first light source generator 6 has an annular structure, and the through hole in its middle corresponds to the lens of the first camera 11, so that light can pass through the through hole and irradiate the target detection object.
The specific detection process in steps S2031 and S2032 may also be as follows: when the first camera 11 collects the first image, the first light source generator 6 is turned on; since the first camera 11 is obliquely arranged, the inner wall of the metal piece 5 can be photographed. Referring to fig. 13, fig. 13 is a gray scale schematic view of another alternative first image in the present application. The first image can be identified directly from top to bottom (i.e. along the negative y-axis direction) in a dark-to-light manner: the first dark area identified is set as the first target dark area, and the second bright area identified is the first target bright area, so that the first target bright area and the first target dark area are determined. The first straight line corresponding to the first target area determined by the first target bright area and the first target dark area is denoted L3.
In an embodiment of the present application, referring to fig. 14, fig. 14 is a gray scale schematic view of another alternative second image of the present application. When the second camera 12 starts to acquire the second image, the first light source generator 6 is turned on, so that the second camera 12 acquires a backlight image. As can be seen from fig. 14, a bright area can be captured, that is, the above-mentioned second target bright area; the area below the second target bright area (i.e. in the negative y-axis direction) is the second target dark area.
In this embodiment, the second image may be identified directly in a dark-to-light manner (i.e. along the positive y-axis direction): the first bright area identified is defined as the second target bright area (not shown in fig. 14; it is the area not covered by the end portion of the object to be detected), and the first dark area is defined as the second target dark area, so that the second target bright area and the second target dark area are determined. The second straight line corresponding to the second target area determined based on the second target bright area and the second target dark area is denoted L4.
It should be noted that the y-axis in the above illustrations is only used to indicate direction in the corresponding drawing; the directions indicated by the y-axis may differ between drawings.
Optionally, after step S203, the detection method further includes: determining the first distance as a target distance value; if the target distance value falls within a preset threshold range, determining that the size detection result of the target detection object is qualified; otherwise, determining that the size detection result is unqualified. This makes it possible to judge whether the size of the target detection object is qualified, facilitating subsequent processes that operate based on the judgment result.
Optionally, the size detection result is determined to be qualified if the first distance d1 satisfies the following condition: 2.8 < d1 < 3.1, in millimeters.
In the actual product detection process, a plurality of products may be detected together, and the processing condition of each product may differ: among the plurality of products there may be products in the first case as well as products in the second case. To save detection steps, that is, so that the position of the adhesive 31 need not be analyzed first, and thereby improve detection efficiency and detection accuracy, in another possible embodiment, referring to fig. 15, fig. 15 is a flow chart of a fourth alternative detection method of the present application. After step S203, the detection method further includes:
S204: acquiring a third image with a third camera 151; the third image includes the top of the target detection object; the target detection object includes the detection hole 4; a preset interval exists between the third camera 151 and the first camera 11.
S205: acquiring a fourth image with a fourth camera 152; the fourth image includes the bottom of the target detection object.
In the present embodiment, referring to fig. 16, fig. 16 is a schematic structural view of a third alternative camera assembly 1 according to the present application. The camera assembly 1 comprises a first camera assembly 14 and a second camera assembly 15; wherein the first camera assembly 14 comprises a first camera 11 and a second camera 12, the second camera assembly 15 comprises a third camera 151 and a fourth camera 152, the axis of the first camera 11 and the axis of the second camera 12 are perpendicular to the ground and parallel to the fixed structure 13 of the first camera assembly 14; the axis of the third camera 151 forms a preset angle with the fixed structure 13 of the second camera assembly 15; the axis of the fourth camera 152 is perpendicular to the ground; the structure of the first camera assembly 14 may be referred to as fig. 6, and the structure of the second camera assembly 15 may be referred to as fig. 12.
S206: and determining a second distance from the detection hole 4 to the end of the target detection object according to the third image and the fourth image.
In the embodiment of the present application, the first distance d1 is the distance determined by L1 and L2 in fig. 5 and 7, and the second distance d2 is the distance determined by L3 and L4 in fig. 13 and 14; the gray scale of the third image may be the image shown in fig. 13, and the gray scale of the fourth image may be the image shown in fig. 14.
S207: and determining a comparison value according to the first distance and the second distance.
Alternatively, the comparison value may be a ratio between the first distance and the second distance, or may be a difference between the first distance and the second distance.
S208: if the comparison value is smaller than or equal to a preset threshold value, determining that the size detection result of the target detection object is qualified; otherwise, determining that the size detection result is unqualified.
Optionally, in order to further improve the accuracy of the detection, in this embodiment the first distance d1 and the second distance d2 need to satisfy the following conditions for the size detection result of the flexible product to be determined qualified: -0.04 < d2 - d1, and d1 < 3.06; the units are all millimeters.
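The qualification checks described in this section (2.8 < d1 < 3.1 for the single-distance case, and -0.04 < d2 - d1 together with d1 < 3.06 for the two-camera-assembly case) can be sketched as follows; the function names are illustrative:

```python
def check_single_distance(d1_mm):
    """Single-distance qualification check from the text:
    qualified when 2.8 < d1 < 3.1 (millimeters)."""
    return 2.8 < d1_mm < 3.1


def check_with_second_distance(d1_mm, d2_mm):
    """Two-distance qualification check from the text:
    qualified when -0.04 < d2 - d1 and d1 < 3.06 (millimeters)."""
    return (d2_mm - d1_mm) > -0.04 and d1_mm < 3.06


print(check_single_distance(2.95))            # True
print(check_single_distance(3.2))             # False
print(check_with_second_distance(3.0, 2.99))  # True  (d2 - d1 = -0.01)
print(check_with_second_distance(3.0, 2.9))   # False (d2 - d1 = -0.1)
```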
Referring to fig. 17, fig. 17 is a schematic structural view of an alternative flexible product size detection device according to the present application. In another aspect, the present application also discloses a flexible product size detection device, which includes:
a first acquisition module 1701 for acquiring a first image using the first camera 11; the first image comprises the top of the target detection object; the target detection object includes a detection hole 4;
a second acquisition module 1702 for acquiring a second image with the second camera 12; the second image comprises the bottom of the target detection object; the first camera 11 and the second camera 12 are disposed opposite to each other;
a determining module 1703, configured to determine a first distance between the detection hole 4 and the end of the target detection object according to the first image and the second image.
In a possible embodiment, the determining module includes a first sub-determining module, a second sub-determining module, and a third sub-determining module; the first sub-determining module is used for determining a first data set of the inner wall of the detection hole 4 based on the first image; the first data set includes pixel coordinates of each of a plurality of first target points; each first target point belongs to the inner wall of the detection hole 4; determining a second dataset for the end based on the second image; the second data set includes pixel coordinates of each of a plurality of second target points; each second target point belongs to the end part;
The second sub-determining module is used for determining a first straight line according to the first data set; determining a second straight line according to the second data set;
a third sub-determining module, configured to determine a relative distance between the first line and the second line; the relative distance is determined as the first distance of the detection aperture 4 from the end.
In a possible embodiment, the first sub-determining module is configured to perform gray-scale processing on the first image to obtain a first target bright area and a first target dark area; the gray value of the first target bright area is smaller than or equal to a first preset gray value; the gray value of the first target dark area is larger than the first preset gray value; determining a first target area of the first target bright area and the first target dark area; and determining a plurality of first target points of the first target area to obtain the first data set.
In a possible embodiment, the second sub-determining module is configured to perform gray-scale processing on the second image to obtain a second target bright area and a second target dark area; the gray value of the second target bright area is smaller than or equal to a second preset gray value; the gray value of the second target dark area is larger than the second preset gray value; the second preset gray value is larger than or equal to the first preset gray value; determining a second target area of the second target bright area and the second target dark area; and determining a plurality of second target points of the second target area to obtain the second data set.
In a possible embodiment, the axis of the first camera 11 is coincident with or parallel to the axis of the second camera 12; the determining module further includes a fourth sub-determining module for acquiring a third image using the third camera 151; the third image comprises the top of the target detection object; the target detection object includes a detection hole 4; acquiring a fourth image with a fourth camera 152; the fourth image comprises the bottom of the target detection object; the third camera 151 and the fourth camera 152 are disposed opposite to each other; an axis of the third camera 151 forms a preset included angle with an axis of the fourth camera 152; determining a second distance from the detection hole 4 to the end of the target detection object according to the third image and the fourth image; determining a comparison value according to the first distance and the second distance; if the comparison value is smaller than or equal to a preset threshold value, determining that the size detection result of the target detection object is qualified; otherwise, determining that the size detection result is unqualified.
In a possible embodiment, the detection hole 4 is internally provided with a metal piece 5; the metal piece 5 is connected with the detection hole 4 through adhesive 31; the adhesive 31 is positioned on the inner wall of the metal piece 5; the axis of the first camera 11 and the axis of the second camera 12 form a preset included angle; the determining module is further used for determining the first distance as a target distance value; if the target distance value meets a preset threshold range, determining that the size detection result of the target detection object is qualified; otherwise, determining that the size detection result is unqualified.
The method embodiments provided in the embodiments of the present application may be performed on a computer terminal, a server, or a similar computing device. Taking running on a server as an example, fig. 18 is a block diagram of the hardware architecture of a server for an alternative detection method of the present application. As shown in fig. 18, the server 1800 may vary considerably in configuration or performance and may include one or more central processing units (Central Processing Unit, CPU) 1810 (the central processing unit 1810 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 1830 for storing data, and one or more storage media 1820 (e.g., one or more mass storage devices) for storing applications 1823 or data 1822. The memory 1830 and the storage medium 1820 may be transitory or persistent. The program stored on the storage medium 1820 may include one or more modules, each of which may include a series of instruction operations on the server. Further, the central processing unit 1810 may be configured to communicate with the storage medium 1820 to execute the series of instruction operations in the storage medium 1820 on the server 1800. The server 1800 may also include one or more power supplies 1860, one or more wired or wireless network interfaces 1850, one or more input/output interfaces 1840, and/or one or more operating systems 1821, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The input/output interface 1840 may be used to receive or transmit data via a network. Specific examples of the above network may include a wireless network provided by a communication provider of the server 1800. In one example, the input/output interface 1840 includes a network adapter (Network Interface Controller, NIC) that may be connected to other network devices through a base station so as to communicate with the internet. In one example, the input/output interface 1840 may be a radio frequency (RF) module for communicating with the internet wirelessly.
It will be appreciated by those skilled in the art that the configuration shown in fig. 18 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, server 1800 may also include more or fewer components than shown in fig. 18, or have a different configuration than shown in fig. 18.
Embodiments of the present application also provide an electronic device including a processor and a memory, where the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, where the at least one instruction, the at least one program, the set of codes, or the set of instructions are loaded and executed by the processor to implement a detection method as described above.
Embodiments of the present application also provide a storage medium that may be disposed in a server to store at least one instruction, at least one program, a set of codes, or a set of instructions related to implementing a detection method in a method embodiment, where the at least one instruction, the at least one program, the set of codes, or the set of instructions are loaded and executed by the processor to implement the detection method described above.
Alternatively, in this embodiment, the storage medium may be located in at least one of a plurality of network servers of a computer network. Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
It should be noted that the foregoing order of the embodiments of the present application is for description only and does not represent the relative merits of the embodiments. The foregoing description has been directed to specific embodiments of this specification; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the present application is not intended to limit the invention to these particular embodiments.

Claims (9)

1. A method for detecting the size of a flexible product, comprising the steps of:
acquiring a first image with a first camera; the first image comprises the top of the target detection object; the target detection object comprises a detection hole; the first camera is close to the top of the target detection object;
Acquiring a second image with a second camera; the second image comprises the bottom of the target detection object; the second camera is close to the bottom of the target detection object;
determining a first distance between the detection hole and the end part of the target detection object according to the first image and the second image;
the axis of the first camera is coincident with or parallel to the axis of the second camera;
after the first distance between the detection hole and the end part of the target detection object is determined according to the first image and the second image, the method further comprises the following steps:
acquiring a third image by using a third camera; the third image comprises the top of the target detection object; the third camera is close to the top of the target detection object; a preset interval exists between the third camera and the first camera;
acquiring a fourth image by using a fourth camera; the fourth image comprises the bottom of the target detection object; the fourth camera is close to the bottom of the target detection object; the axis of the third camera and the axis of the fourth camera form a preset included angle;
determining a second distance between the detection hole and the end part of the target detection object according to the third image and the fourth image;
Determining a comparison value according to the first distance and the second distance;
if the comparison value is smaller than or equal to a preset threshold value, determining that the size detection result of the target detection object is qualified; otherwise, determining that the size detection result is unqualified.
2. The method according to claim 1, wherein determining a first distance of the detection hole from an end of the target detection object from the first image and the second image includes:
determining a first dataset of an inner wall of the detection aperture based on the first image; the first data set includes pixel coordinates of each of a plurality of first target points; each first target point belongs to the inner wall of the detection hole;
determining a second dataset of the end portion based on the second image; the second data set includes pixel coordinates of each of a plurality of second target points; each second target point belongs to the end part;
determining a first straight line according to the first data set;
determining a second straight line according to the second data set;
determining the relative distance between the first straight line and the second straight line; the relative distance is determined as a first distance of the detection aperture from the end.
3. The method of detecting according to claim 2, wherein the determining a first dataset of the inner wall of the detection aperture based on the first image comprises:
gray processing is carried out on the first image to obtain a first target bright area and a first target dark area; the gray value of the first target bright area is smaller than or equal to a first preset gray value; the gray value of the first target dark area is larger than the first preset gray value;
determining a first target area located between the first target bright area and the first target dark area;
and determining a plurality of first target points of the first target area to obtain the first data set.
4. A method of detecting according to claim 3, wherein said determining a second data set of the end portion based on the second image comprises:
gray processing is carried out on the second image to obtain a second target bright area and a second target dark area; the gray value of the second target bright area is smaller than or equal to a second preset gray value; the gray value of the second target dark area is larger than the second preset gray value; the second preset gray value is larger than or equal to the first preset gray value;
Determining a second target area located between the second target bright area and the second target dark area;
and determining a plurality of second target points of the second target area to obtain the second data set.
5. The method according to claim 1, wherein a metal piece is placed in the detection hole; the metal piece is connected with the detection hole through adhesive glue; the adhesive glue is positioned on the inner wall of the metal piece;
the axis of the first camera and the axis of the second camera form a preset included angle;
after the first distance between the detection hole and the end part of the target detection object is determined according to the first image and the second image, the method further comprises the following steps:
determining the first distance as a target distance value;
if the target distance value meets a preset threshold range, determining that the size detection result of the target detection object is qualified; otherwise, determining that the size detection result is unqualified.
6. A flexible product size detection device, comprising:
the first acquisition module is used for acquiring a first image by using a first camera; the first image comprises the top of the target detection object; the target detection object comprises a detection hole; the first camera is close to the top of the target detection object;
The second acquisition module is used for acquiring a second image by using a second camera; the second image comprises the bottom of the target detection object; the second camera is close to the bottom of the target detection object; the axis of the first camera is coincident with or parallel to the axis of the second camera;
a determining module, configured to determine a first distance between the detection hole and an end of the target detection object according to the first image and the second image;
the determining module comprises a fourth sub-determining module, and the fourth sub-determining module is used for acquiring a third image by using a third camera; the third image comprises the top of the target detection object; the third camera is close to the top of the target detection object; a preset interval exists between the third camera and the first camera; acquiring a fourth image by using a fourth camera; the fourth image comprises the bottom of the target detection object; the fourth camera is close to the bottom of the target detection object; the axis of the third camera and the axis of the fourth camera form a preset included angle; determining a second distance between the detection hole and the end part of the target detection object according to the third image and the fourth image; determining a comparison value according to the first distance and the second distance; if the comparison value is smaller than or equal to a preset threshold value, determining that the size detection result of the target detection object is qualified; otherwise, determining that the size detection result is unqualified.
7. A flexible product size detection system, comprising a camera assembly and a processing unit;
the camera assembly includes a first camera and a second camera; the first camera is close to the top of the target detection object; the second camera is close to the bottom of the target detection object;
the first camera is used for acquiring a first image and sending the first image to the processing unit; the first image comprises the top of the target detection object; the target detection object comprises a detection hole;
the second camera is used for acquiring a second image and sending the second image to the processing unit; the second image comprises the bottom of the target detection object; the first camera and the second camera are oppositely arranged; the axis of the first camera is coincident with or parallel to the axis of the second camera;
the processing unit is electrically connected with the first camera and the second camera respectively; the processing unit is used for determining a first distance between the detection hole and the end part of the target detection object according to the first image and the second image;
after the first distance between the detection hole and the end part of the target detection object is determined according to the first image and the second image, the method further comprises the following steps:
Acquiring a third image by using a third camera; the third image comprises the top of the target detection object; the third camera is close to the top of the target detection object; a preset interval exists between the third camera and the first camera;
acquiring a fourth image by using a fourth camera; the fourth image comprises the bottom of the target detection object; the fourth camera is close to the bottom of the target detection object; the axis of the third camera and the axis of the fourth camera form a preset included angle;
determining a second distance between the detection hole and the end part of the target detection object according to the third image and the fourth image;
determining a comparison value according to the first distance and the second distance;
if the comparison value is smaller than or equal to a preset threshold value, determining that the size detection result of the target detection object is qualified; otherwise, determining that the size detection result is unqualified.
8. A computer device comprising a processor and a memory having stored therein at least one instruction, at least one program, code set or instruction set, the at least one instruction, at least one program, code set or instruction set being loaded and executed by the processor to implement the detection method of any of claims 1-5.
9. A computer storage medium having stored therein at least one instruction or at least one program loaded and executed by a processor to implement the detection method of any of claims 1-5.
CN202111483710.0A 2021-12-07 2021-12-07 Flexible product size detection method, device and system Active CN114155227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111483710.0A CN114155227B (en) 2021-12-07 2021-12-07 Flexible product size detection method, device and system

Publications (2)

Publication Number Publication Date
CN114155227A CN114155227A (en) 2022-03-08
CN114155227B true CN114155227B (en) 2024-01-26

Family

ID=80453036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111483710.0A Active CN114155227B (en) 2021-12-07 2021-12-07 Flexible product size detection method, device and system

Country Status (1)

Country Link
CN (1) CN114155227B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862712A (en) * 2017-10-20 2018-03-30 陈宸 Sized data determines method, apparatus, storage medium and processor
CN111325769A (en) * 2018-12-13 2020-06-23 北京嘀嘀无限科技发展有限公司 Target object detection method and device
WO2020199072A1 (en) * 2019-04-01 2020-10-08 Intel Corporation Autonomous driving dataset generation with automatic object labelling methods and apparatuses
CN111932605A (en) * 2020-09-11 2020-11-13 广东韶钢松山股份有限公司 Size detection method and device, electronic equipment and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a Machine-Vision Online Detection System for Hole-Making Dimensions of Iron Tower Components; Hao Mengjuan; Dong Guixi; Gao Lipeng; Qin Zhiying; Modular Machine Tool & Automatic Manufacturing Technique, No. 02, pp. 18-20 *

Similar Documents

Publication Publication Date Title
CN107957294B (en) Ambient light intensity detection method and device, storage medium and electronic equipment
CN106097361B (en) Defect area detection method and device
US20180018786A1 (en) Method and device for obtaining image, and recording medium thereof
CN108021161A (en) Ambient light intensity detection method, device, storage medium and electronic equipment
US10949669B2 (en) Augmented reality geolocation using image matching
CN112614085A (en) Object detection method and device and terminal equipment
CN109753425B (en) Popup window processing method and device
CN112700440B (en) Object defect detection method and device, computer equipment and storage medium
CN113344862B (en) Defect detection method, device, electronic equipment and storage medium
CN112241716B (en) Training sample generation method and device
CN112991459A (en) Camera calibration method, device, equipment and storage medium
CN112102417A (en) Method and device for determining world coordinates and external reference calibration method for vehicle-road cooperative roadside camera
CN110675154B (en) Service providing method, device, equipment and medium based on face recognition
CN109189290B (en) Click area identification method and device and computer readable storage medium
CN114155227B (en) Flexible product size detection method, device and system
CN111382701B (en) Motion capture method, motion capture device, electronic equipment and computer readable storage medium
CN108427110A (en) Distance measuring method, device and electronic equipment
CN117252837A (en) Data processing method and device for wafer test, medium and electronic equipment
CN107734324B (en) Method and system for measuring illumination uniformity of flash lamp and terminal equipment
CN109413412B (en) Method and device for testing performance of gray card, electronic equipment and storage medium
CN113050022A (en) Image positioning method and device based on rotating antenna and terminal equipment
CN113343554B (en) Arch dam underwater damage identification method, terminal equipment and storage medium
CN111294253B (en) Test data processing method and device, computer equipment and storage medium
CN107328387A (en) Angle measuring method, device and video camera
CN108898632B (en) Method and device for determining rotation angle of instrument pointer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 215011 No. 2, Kunlunshan Road, high tech Zone, Suzhou, Jiangsu

Applicant after: Suzhou Jiaqishi Technology Co.,Ltd.

Address before: 215011 No. 2, Kunlunshan Road, high tech Zone, Suzhou, Jiangsu

Applicant before: SUZHOU JIAQISHI INFORMATION SCIENCE & TECHNOLOGY Co.,Ltd.

GR01 Patent grant