CN113421207A - Visual inspection method, apparatus, product and computer storage medium - Google Patents

Visual inspection method, apparatus, product and computer storage medium

Info

Publication number
CN113421207A
Authority
CN
China
Prior art keywords
axis support
image
camera
axis
support
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110964758.7A
Other languages
Chinese (zh)
Inventor
张兵兵
蔡恩祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Original Assignee
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xinrun Fulian Digital Technology Co Ltd filed Critical Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority to CN202110964758.7A
Publication of CN113421207A
Legal status: Pending

Classifications

    • G06T 5/80 Geometric correction (G06T 5/00 Image enhancement or restoration; G06T Image data processing or generation, in general; G06 Computing; calculating or counting; G Physics)
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures (G01B Measuring length, thickness or similar linear dimensions; measuring angles; measuring areas; measuring irregularities of surfaces or contours; G01 Measuring; testing; G Physics)
    • G06T 7/13 Edge detection (G06T 7/10 Segmentation; Edge detection; G06T 7/00 Image analysis)
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration (G06T 7/00 Image analysis)
    • G06T 2207/30108 Industrial image inspection (G06T 2207/30 Subject of image; Context of image processing; G06T 2207/00 Indexing scheme for image analysis or image enhancement)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a visual inspection method, a device, a product and a computer storage medium. The method is implemented on a support comprising a movable X-axis support, a movable Y-axis support and a movable Z-axis support that move in different directions in space; a 3D camera, a 2D camera and a telecentric lens are arranged on the Z-axis support, with the telecentric lens paired with the 2D camera. The visual inspection method comprises the following steps: controlling the X-axis support, the Y-axis support and the Z-axis support to move while acquiring a 3D image of the product through the 3D camera; obtaining a 2D image of the product through the calibrated 2D camera and the telecentric lens; carrying out distortion correction on the 2D image to obtain a corrected, distortion-free image; carrying out sub-pixel edge extraction on the corrected image to obtain a result image; and determining three-dimensional data of the product according to the 3D image and the result image. The invention improves the precision and the completeness of the obtained three-dimensional product data.

Description

Visual inspection method, apparatus, product and computer storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a visual inspection method, apparatus, product, and computer storage medium.
Background
In modern industrial production, a 2D area-scan or line-scan camera is generally used to acquire the 2D outline dimensions of a product. Such a camera cannot measure the product's three-dimensional information, which is unfavorable for assembly and quality control in the subsequent steps of automated production. A 3D camera, in turn, images the product at a low resolution, and parts of the product are occluded by the object itself, so the capture precision is low and the acquired data are incomplete.
Disclosure of Invention
The main object of the present invention is to provide a visual inspection method, device, product and computer storage medium that solve the prior-art problems of incomplete three-dimensional data acquisition and low precision.
To achieve the above object, the present invention provides a vision inspection method implemented on a support that includes a movable X-axis support, a Y-axis support and a Z-axis support, each moving in a different direction in space. A 3D camera, a 2D camera and a telecentric lens are disposed on the Z-axis support, the telecentric lens being paired with the 2D camera. The vision inspection method includes the following steps:
controlling the X-axis support, the Y-axis support and the Z-axis support to move, and simultaneously acquiring a 3D image of a product through the 3D camera;
calibrating the 2D camera with a preset shooting mode and a preset number of shots; obtaining a 2D image of the product through the calibrated 2D camera and the telecentric lens; carrying out distortion correction on the 2D image to obtain a corrected, distortion-free image; carrying out sub-pixel edge extraction on the corrected image to obtain a result image; and determining three-dimensional data of the product according to the 3D image and the result image.
Optionally, the step of calibrating the 2D camera with a preset shooting mode and a preset number of shots includes:
capturing calibration images of a calibration board at different poses and angles, wherein the preset number of shots is at least 10 and at most 20;
and acquiring corner information from each captured calibration image, determining the inner corners of the calibration board and their image coordinates from the corner information, and calibrating the 2D camera with a camera calibration function.
Optionally, the step of performing sub-pixel edge extraction on the corrected image to obtain a result image includes:
binarizing the 2D image to obtain a binary image, detecting edges in the binary image, and determining the sub-pixel contour attributes of the binary image;
and processing the sub-pixel contour attributes to obtain sub-pixel edges, and converting the sub-pixel edges into the image coordinate system to obtain a result image.
Optionally, the step of acquiring a 3D image of the product by the 3D camera while controlling the X-axis, Y-axis and Z-axis supports to move includes:
specifying an initial position and an end position for the 3D camera, and adjusting the X-axis support, the Y-axis support and the Z-axis support to move the 3D camera to the initial position;
and controlling the X-axis support, the Y-axis support and the Z-axis support to move, so that the 3D camera moves from the initial position to the end position, and acquiring a 3D image of a product in the moving process of the 3D camera.
Optionally, the adjusting the X-axis support, the Y-axis support, and the Z-axis support to move the 3D camera to the initial position includes:
and adjusting the height of the Z-axis support so that the product lies within the depth-of-field range of the telecentric lens, and adjusting the X-axis support and the Y-axis support to move the 3D camera to the initial position.
Optionally, the Z-axis support, the X-axis support and the Y-axis support are perpendicular to each other, the Y-axis support includes a first support and a second support which are opposite and spaced apart from each other, the X-axis support is disposed between the first support and the second support, the Z-axis support is disposed on the X-axis support, wherein the Z-axis support moves along a vertical plane relative to the X-axis support, and the Y-axis support moves along a horizontal plane relative to the X-axis support.
Optionally, the X-axis support, the Y-axis support and the Z-axis support are each provided with a motor, and the motors move the supports independently or in coordination with one another within their respective travel ranges.
Further, to achieve the above object, the present invention provides a vision inspection apparatus including a support, the support including a movable X-axis support, a Y-axis support and a Z-axis support that move in different directions in space respectively, wherein a 3D camera, a 2D camera and a telecentric lens are provided on the Z-axis support, the telecentric lens being paired with the 2D camera; the vision inspection apparatus further includes:
the detection module is used for controlling the X-axis support, the Y-axis support and the Z-axis support to move and simultaneously acquiring a 3D image of a product through the 3D camera;
the calibration module is used for calibrating the 2D camera in a preset shooting mode and preset shooting times;
and the data processing module is used for acquiring a 2D image of the product through the calibrated 2D camera and the telecentric lens, performing distortion correction on the 2D image to obtain a corrected image after distortion is eliminated, performing sub-pixel edge extraction on the corrected image to obtain a result image, and determining three-dimensional data of the product according to the 3D image and the result image.
Furthermore, to achieve the above object, the present invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the visual inspection method described above.
In addition, to achieve the above object, the present invention also provides a computer storage medium having a visual inspection program stored thereon, the visual inspection program implementing the steps of the visual inspection method as described above when executed by a processor.
The invention provides a visual inspection method implemented on a support. The support includes movably mounted X-axis, Y-axis and Z-axis supports, and a 3D camera and a 2D camera are arranged on the Z-axis support. The 3D camera captures a 3D image of the product while the X-axis, Y-axis and Z-axis supports move; because the three supports move in different directions and cooperate with one another, the resulting 3D image is neither occluded nor missing data. The 2D camera, working with a telecentric lens, captures a 2D image of the product with improved precision. Thus, by planning the movement paths of the X-axis, Y-axis and Z-axis supports, the required 3D and 2D images can be obtained, and by matching the 3D image with the 2D image, the three-dimensional data required for the product can be obtained, complete and with high precision.
Drawings
FIG. 1 is a schematic diagram of the hardware operating environment of the visual inspection method of the present invention;
FIG. 2 is a schematic flow chart illustrating a visual inspection method according to an embodiment of the present invention;
FIG. 3 is a schematic structural view of the support according to the present invention;
FIG. 4 is a schematic diagram of a corrected image obtained by the vision inspection method according to the present invention;
FIG. 5 is a schematic diagram of a 2D camera cooperating with a telecentric lens in the vision inspection method of the present invention;
FIG. 6 is a diagram of product edges extracted by direct (pixel-level) edge detection;
FIG. 7 is a diagram of product edges extracted with sub-pixel precision;
FIG. 8 is a block diagram of a visual inspection apparatus according to the present invention.
The reference numbers illustrate:
Reference numeral    Name                Reference numeral    Name
100                  Support             40                   3D camera
10                   X-axis support      50                   2D camera
20                   Y-axis support      60                   Telecentric lens
30                   Z-axis support      70                   Coaxial light source
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention may be a PC, or a mobile terminal device with a display function such as a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, and the like.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like, the sensors being, for example, light sensors, motion sensors and other sensors. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display screen according to the ambient light, and a proximity sensor, which turns off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the acceleration in each direction (generally three axes) and detect gravity when stationary, and can be used for applications that recognize the posture of the mobile terminal (such as switching between landscape and portrait, related games, and magnetometer posture calibration), for vibration-recognition functions (such as a pedometer or tap detection), and the like. Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a vision inspection program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the visual inspection program stored in the memory 1005 and perform the following operations:
controlling the X-axis support, the Y-axis support and the Z-axis support to move, and simultaneously acquiring a 3D image of a product through the 3D camera;
calibrating the 2D camera with a preset shooting mode and a preset number of shots; obtaining a 2D image of the product through the calibrated 2D camera and the telecentric lens; carrying out distortion correction on the 2D image to obtain a corrected, distortion-free image; carrying out sub-pixel edge extraction on the corrected image to obtain a result image; and determining three-dimensional data of the product according to the 3D image and the result image.
Further, the processor 1001 may be configured to invoke a visual inspection program stored in the memory 1005 and perform the following operations:
capturing calibration images of a calibration board at different poses and angles, wherein the preset number of shots is at least 10 and at most 20;
and acquiring corner information from each captured calibration image, determining the inner corners of the calibration board and their image coordinates from the corner information, and calibrating the 2D camera with a camera calibration function.
Further, the processor 1001 may be configured to invoke a visual inspection program stored in the memory 1005 and perform the following operations:
binarizing the 2D image to obtain a binary image, detecting edges in the binary image, and determining the sub-pixel contour attributes of the binary image;
and processing the sub-pixel contour attributes to obtain sub-pixel edges, and converting the sub-pixel edges into the image coordinate system to obtain a result image.
Further, the processor 1001 may be configured to invoke a visual inspection program stored in the memory 1005 and perform the following operations:
specifying an initial position and an end position for the 3D camera, and adjusting the X-axis support, the Y-axis support and the Z-axis support to move the 3D camera to the initial position;
and controlling the X-axis support, the Y-axis support and the Z-axis support to move, so that the 3D camera moves from the initial position to the end position, and acquiring a 3D image of a product in the moving process of the 3D camera.
Further, the processor 1001 may be configured to invoke a visual inspection program stored in the memory 1005 and perform the following operations:
and adjusting the height of the Z-axis support so that the product lies within the depth-of-field range of the telecentric lens, and adjusting the X-axis support and the Y-axis support to move the 3D camera to the initial position.
The invention provides a visual inspection method implemented on a support 100. The support 100 comprises a movable X-axis support 10, a movable Y-axis support 20 and a movable Z-axis support 30, which move in different directions in space respectively. One embodiment of the structure of the X-axis support 10, the Y-axis support 20 and the Z-axis support 30 is as follows:
referring to fig. 2, the Z-axis support 30, the X-axis support 10, and the Y-axis support 20 are perpendicular to each other, the Y-axis support 20 includes a first support and a second support which are opposite and spaced apart from each other, the X-axis support 10 is disposed between the first support and the second support, the Z-axis support 30 is disposed on the X-axis support 10, wherein the Z-axis support 30 moves along a vertical plane with respect to the X-axis support 10, and the Y-axis support 20 moves along a horizontal plane with respect to the X-axis support 10.
It can be understood that the three-axis supports (the X-axis support 10, the Y-axis support 20 and the Z-axis support 30) are provided to eliminate blind spots during capture (i.e., the portions occluded by the object itself), thereby avoiding incomplete and inaccurate images and data.
Of course, in other embodiments the X-axis support 10, the Y-axis support 20 and the Z-axis support 30 may be arranged differently, for example with two of them parallel to each other and perpendicular to the third, depending on the site, the desired image data, and the like.
Further, a 3D camera 40, a 2D camera 50 and a telecentric lens 60 are disposed on the Z-axis support 30, the telecentric lens 60 being paired with the 2D camera 50, and a coaxial light source 70 is disposed on the Z-axis support 30 to assist the imaging of the 2D camera 50 and the 3D camera 40.
It should be noted that, compared with a common lens, the telecentric lens 60 eliminates perspective distortion and the occlusion of the measured object that perspective distortion causes in imaging. Moreover, within a certain object-distance range, the magnification of an image taken through the telecentric lens 60 does not change with the object distance. A common lens yields different magnifications when the measured objects do not lie on the same measurement plane, whereas the telecentric magnification is constant: it does not change over the depth of field, there is no parallax, and objects of the same size image at the same size at different heights, so capture is more accurate.
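As a quick numeric illustration of the constant-magnification property just described (the magnification and sensor pixel size below are assumed example values, not figures from this disclosure):

```python
# Illustrative values only (assumed, not from the patent).
magnification = 0.5                            # telecentric magnification, constant over the depth of field
pixel_size_mm = 0.00345                        # 3.45 um sensor pixel
mm_per_pixel = pixel_size_mm / magnification   # each image pixel maps to 0.0069 mm on the object

feature_mm = 10.0                              # a 10 mm feature on the product
print(feature_mm / mm_per_pixel)               # ~1449 px, the same at any height inside the depth of field
```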
Further, referring to fig. 2, the visual inspection method includes the following steps:
in step S10, the X-axis support 10, the Y-axis support 20, and the Z-axis support 30 are controlled to move, and a 3D image of the product is acquired by the 3D camera 40.
Optionally, each of the X-axis support 10, the Y-axis support 20 and the Z-axis support 30 is provided with a motor, and the motors move the supports independently or in coordination with one another within their respective travel ranges.
Specifically, in this embodiment each axis is driven by its own motor, so each axis can move independently; the motors are coordinated through PLC control, so combined multi-axis movement can also be realized.
Step S20, calibrating the 2D camera 50 with a preset shooting mode and a preset number of shots; acquiring a 2D image of the product through the calibrated 2D camera 50 and the telecentric lens 60; performing distortion correction on the 2D image to obtain a corrected, distortion-free image; performing sub-pixel edge extraction on the corrected image to obtain a result image; and determining three-dimensional data of the product according to the 3D image and the result image.
Although the industrial field already has methods that use a 2D area-scan or line-scan camera to acquire the 2D contour dimensions of a product, such methods cannot measure the three-dimensional height information of the product and their measurement error is large, which is unfavorable for process assembly and quality control of the product in automated production.
Specifically, when the parallel light generated by the coaxial light source passes through the half-mirror, part of it is reflected onto the surface of the object being imaged; the light reflected from the object surface passes back through the half-mirror and reaches the camera sensor through the telecentric lens 60. Because the rays are parallel, there is no perspective deformation or occlusion, which guarantees a sharp image of the object's contour edges; the image is then sent to the display and processing system for analysis, calculation and measurement. Choosing the telecentric lens 60 ensures that objects within a certain height range are imaged without refocusing thanks to its large working depth of field, which eliminates manual adjustment errors, simplifies the process, and removes perspective deformation and the occlusion it causes, so measurement precision is not affected. Moreover, verification shows that the selected telecentric lens 60 has little distortion, and an image correction algorithm further reduces it, which fully preserves the fidelity of the acquired image.
Before the product is photographed, the Z-axis height is adjusted during installation, and the 2D image captured through the telecentric lens 60 is then corrected, so a 2D image with sharply defined edges is obtained; the software extracts sub-pixel edges from the corrected image, and the precision can reach the pixel-size (micron) level. In actual use, the X-axis support 10, the Y-axis support 20 and the Z-axis support 30 are moved according to the planned 3D and 2D capture positions while the 3D camera 40 and the 2D camera 50 capture images, and the images are processed separately by software and algorithms. The three-dimensional data required for the product, such as height information and planar geometric dimensions, is thereby obtained, complete and with high precision: the coordinate parameters of the 2D image and the 3D point cloud are fused to reconstruct a three-dimensional model and obtain the structural dimensions and topography of the product, so the complete information of the product is obtained.
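A minimal sketch of the fusion step just described, assuming the sub-pixel 2D edge points have already been converted to millimetres with the telecentric scale and the 3D camera delivers a height map registered to the same XY frame; the function and parameter names are hypothetical, not taken from this disclosure:

```python
import numpy as np

def fuse_2d_3d(edge_points_mm, height_map_mm, xy_to_index):
    """Attach a height from the 3D data to each sub-pixel 2D edge point.

    edge_points_mm : (N, 2) array of edge coordinates in mm (from the 2D/telecentric path)
    height_map_mm  : 2D array of heights in mm (from the 3D camera)
    xy_to_index    : callable mapping an (x, y) mm coordinate to a (row, col) index in the
                     height map -- assumes both data sets share one XY reference frame.
    """
    fused = []
    for x, y in edge_points_mm:
        r, c = xy_to_index(x, y)
        fused.append((x, y, float(height_map_mm[r, c])))
    return np.asarray(fused)  # (N, 3): planar geometry from the 2D path, height from the 3D path
```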
Moreover, compared with other arrangements (such as mounting a robot arm, or installing multiple cameras for multi-angle capture), the support structure in this embodiment is simple and occupies little space; motor-controlled three-axis movement simplifies the capture scheme and keeps the capture cost under control.
Based on the foregoing embodiments, in an embodiment, the step S10 includes:
in step S101, an initial position and an end position of the 3D camera 40 are designated, and the X-axis support 10, the Y-axis support 20, and the Z-axis support 30 are adjusted to move the 3D camera 40 to the initial position.
Wherein, preferably, step S101 includes:
step S101-1, adjusting the height position of the Z-axis support 30 to make the depth of field of the product within the depth of field range of the telecentric lens 60, and adjusting the X-axis support 10 and the Y-axis support 20 to move the 3D camera 40 to the initial position.
And step S102, controlling the X-axis support 10, the Y-axis support 20 and the Z-axis support 30 to move, moving the 3D camera 40 from the initial position to the end position, and acquiring a 3D image of the product during the movement of the 3D camera 40.
Specifically, the 3D camera 40 needs to photograph the product continuously over a certain distance to obtain the 3D image, while the 2D camera 50 shoots at a fixed position along the path. By specifying the initial position and the end position, the 3D camera 40 can be driven along the planned trajectory to acquire the 3D image. Controlling the height of the Z-axis support 30 adjusts the imaging range of the telecentric lens 60 so that the product lies within the lens's depth of field; the measured object therefore stays in focus throughout that range, and accuracy is improved.
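The scan just described might be driven roughly as follows; `axes` and `camera_3d` are hypothetical wrappers around the PLC-controlled stages and the 3D camera, and the step size is an assumed value, so this is only a sketch of the control flow, not the actual implementation:

```python
def scan_3d(axes, camera_3d, start_pos, end_pos, step_mm=0.05):
    """Sweep the 3D camera from start_pos to end_pos along X, grabbing one profile per step."""
    x, y, z = start_pos
    axes.move_to(x, y, z)                          # bring the 3D camera to the initial position
    profiles = []
    while x <= end_pos[0]:
        profiles.append(camera_3d.grab_profile())  # one height profile at the current position
        x += step_mm
        axes.move_to(x, y, z)                      # advance toward the end position
    return profiles                                # stacked profiles form the 3D image of the product
```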
Based on the foregoing embodiment, a first embodiment is proposed, and step S20 includes:
step S201, shooting the calibration picture at different postures and different angles of the calibration picture through the calibration board, wherein the preset shooting frequency is more than or equal to 10 and less than or equal to 20.
Step S202, obtaining corner information for each shot calibration picture, determining an inner corner and image coordinates of the inner corner through the calibration board according to the corner information, and calibrating the 2D camera 50 through a camera calibration function.
It will be appreciated that the imaging process of a camera is essentially a sequence of coordinate-system transformations: points in space are first converted from the world coordinate system to the camera coordinate system, then projected onto the imaging plane (the image physical coordinate system), and finally converted into the image pixel coordinate system. However, limited lens manufacturing precision and assembly variations introduce distortion, so the raw image is distorted.
Lens distortion divides into radial distortion and tangential distortion. Tangential distortion arises when the lens is not parallel to the plane of the camera sensor (the imaging plane); it is mostly caused by mounting deviations when the lens is attached to the lens module, is rarely significant in practice, and can therefore be ignored. Radial distortion is distortion distributed along the lens radius: it occurs because light rays are bent more near the edge of the lens than near its center, although the telecentric lens 60 is designed with little distortion. Further, as can be seen from fig. 4, radial distortion mainly takes two forms, barrel distortion and pincushion distortion.
Specifically, the calibration images need to be captured with the calibration board at different positions, angles and poses, and at least 3 images are required for an accurate calibration; this embodiment preferably uses 10 to 20 images, which avoids an overly heavy image-data-processing workload while still giving a good result after calibration.
Optionally, a checkerboard pattern of black and white squares or a dot-type Halcon calibration board is selected; to achieve a good calibration, the calibration board must be manufactured to high, micron-level precision.
Furthermore, for each calibration image, corner information is extracted, i.e. the positions of the intersections of two adjacent black squares. The detected inner corners are drawn on the chessboard calibration image, and once the image coordinates of the inner corners are obtained, calibration is performed with a camera calibration function; images corrected with the calibration result are then free of distortion.
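A minimal OpenCV-style sketch of the chessboard calibration and distortion-correction steps described above. The board geometry (9 x 6 inner corners), the file names and the image list are assumptions for illustration, not values taken from this disclosure:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner grid of the chessboard (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board coordinates, 1 unit per square

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):  # the 10-20 shots taken at different poses and angles
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)  # image coordinates of the inner corners

# Estimate the intrinsic matrix and the distortion coefficients (k1, k2, p1, p2, k3)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Distortion correction of a production image taken with the calibrated 2D camera
raw = cv2.imread("product_2d.png", cv2.IMREAD_GRAYSCALE)
corrected = cv2.undistort(raw, K, dist)
```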
Based on the foregoing embodiment, a second embodiment is proposed, and step S20 includes:
step S203, carrying out binarization on the 2D image to obtain a binary image, and determining the sub-pixel contour attribute of the binary image for the detected edge of the binary image.
And S204, processing the sub-pixel outline attribute to obtain a sub-pixel edge, and converting the sub-pixel edge into a coordinate system to obtain a result image.
Specifically, sub-pixel edge detection can raise detection accuracy to the sub-pixel level. Sub-pixels are units smaller than a pixel, obtained by subdividing the pixel as the basic unit, which improves the effective image resolution. Because sub-pixel edge points lie in the gradual gray-level transition region of the image, their positions can be obtained with various methods such as polynomial fitting. Sub-pixel positioning can be understood as a software method that improves edge-detection accuracy without changing the hardware of the imaging system, i.e. an image-processing technique whose resolution is finer than one pixel. Specifically:
according to the shot image, binarization is carried out, then Canny operators based on Gaussian convolution are used for detecting edges, then attributes of contour objects such as straight lines, circles and ellipses are determined according to the detected contour objects, XLD contours are further processed to obtain sub-pixel edges, then the results are converted into image coordinate system space and the results are displayed, referring to fig. 6 and 7, fig. 6 is a product edge image extracted by direct edges (without sub-pixel extraction), fig. 7 is a product edge image extracted by sub-pixels, and the comparison shows that the hole shape of the product edge image extracted by sub-pixels is closer to a circle and the accuracy of image edge processing is improved.
Based on the foregoing embodiment, the experimenters measured and compared product hole diameters (apertures 1 to 4). Comparing the edges extracted at sub-pixel precision with the directly extracted edges, the data accuracy improved by about 1.4% to 2.5%, as shown in Table 1:
TABLE 1
                              Aperture 1      Aperture 2      Aperture 3      Aperture 4
Before optimization           80.7568         45.8542         78.5802         45.2946
After sub-pixel optimization  78.8188         44.8234         77.3126         44.6562
Improvement (%)               2.399797912     2.247994731     1.613129007     1.409439536
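The improvement row is consistent with a relative difference between the two measurements, which can be checked directly (aperture 1 shown):

```python
before, after = 80.7568, 78.8188          # aperture 1, before vs. after sub-pixel optimization
print((before - after) / before * 100)    # ~2.399798, matching the improvement value in Table 1
```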
Further, in order to achieve the above object, the present invention also provides a vision inspection apparatus. Referring to fig. 8, the apparatus includes a support, the support including a movable X-axis support, a Y-axis support and a Z-axis support that move in different directions in space respectively, wherein a 3D camera, a 2D camera and a telecentric lens are provided on the Z-axis support, the telecentric lens being paired with the 2D camera; the vision inspection apparatus further includes:
the detection module 1 is used for controlling the X-axis support, the Y-axis support and the Z-axis support to move and simultaneously acquiring a 3D image of a product through the 3D camera;
the calibration module 2 is used for calibrating the 2D camera according to a preset shooting mode and preset shooting times;
the data processing module 3 is configured to acquire a 2D image of the product through the calibrated 2D camera and the telecentric lens, perform distortion correction on the 2D image to obtain a corrected image with distortion removed, perform sub-pixel edge extraction on the corrected image to obtain a result image, and determine three-dimensional data of the product according to the 3D image and the result image.
Optionally, the calibration module 2 is further configured to:
capturing calibration images of a calibration board at different poses and angles, wherein the preset number of shots is at least 10 and at most 20;
and acquiring corner information from each captured calibration image, determining the inner corners of the calibration board and their image coordinates from the corner information, and calibrating the 2D camera with a camera calibration function.
Optionally, the detection module 1 is further configured to:
specifying an initial position and an end position for the 3D camera, and adjusting the X-axis support, the Y-axis support and the Z-axis support to move the 3D camera to the initial position;
and controlling the X-axis support, the Y-axis support and the Z-axis support to move, so that the 3D camera moves from the initial position to the end position, and acquiring a 3D image of a product in the moving process of the 3D camera.
Optionally, the detection module 1 is further configured to:
and adjusting the height of the Z-axis support so that the product lies within the depth-of-field range of the telecentric lens, and adjusting the X-axis support and the Y-axis support to move the 3D camera to the initial position.
The present invention further provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the visual inspection method described above; for the method implemented, reference may be made to the embodiments of the visual inspection method of the present invention, and details are not repeated here.
The invention also provides a computer storage medium.
The computer storage medium of the present invention has stored thereon a vision inspection program that, when executed by a processor, implements the steps of the vision inspection method described above.
For the method implemented when the visual inspection program is executed by the processor, reference may be made to the embodiments of the visual inspection method of the present invention; details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A visual inspection method, characterized in that the visual inspection method is implemented on a support, the support comprises a movable X-axis support, a movable Y-axis support and a movable Z-axis support which move in different directions in space respectively, a 3D camera, a 2D camera and a telecentric lens are arranged on the Z-axis support, and the telecentric lens is paired with the 2D camera; the visual inspection method comprises the following steps:
controlling the X-axis support, the Y-axis support and the Z-axis support to move, and simultaneously acquiring a 3D image of a product through the 3D camera;
calibrating the 2D camera with a preset shooting mode and a preset number of shots; obtaining a 2D image of the product through the calibrated 2D camera and the telecentric lens; carrying out distortion correction on the 2D image to obtain a corrected, distortion-free image; carrying out sub-pixel edge extraction on the corrected image to obtain a result image; and determining three-dimensional data of the product according to the 3D image and the result image.
2. The visual inspection method of claim 1, wherein the step of calibrating the 2D camera with a preset shot pattern and a preset number of shots comprises:
capturing calibration images of a calibration board at different poses and angles, wherein the preset number of shots is at least 10 and at most 20;
and acquiring corner information from each captured calibration image, determining the inner corners of the calibration board and their image coordinates from the corner information, and calibrating the 2D camera with a camera calibration function.
3. The visual inspection method of claim 1, wherein said step of performing sub-pixel edge extraction on said corrected image to obtain a resultant image comprises:
binarizing the 2D image to obtain a binary image, detecting edges in the binary image, and determining the sub-pixel contour attributes of the binary image;
and processing the sub-pixel contour attributes to obtain sub-pixel edges, and converting the sub-pixel edges into the image coordinate system to obtain a result image.
4. The visual inspection method of claim 1, wherein the step of acquiring the 3D image of the product by the 3D camera while controlling the movement of the X-axis support, the Y-axis support, and the Z-axis support includes:
specifying an initial position and an end position for the 3D camera, and adjusting the X-axis support, the Y-axis support and the Z-axis support to move the 3D camera to the initial position;
and controlling the X-axis support, the Y-axis support and the Z-axis support to move, so that the 3D camera moves from the initial position to the end position, and acquiring a 3D image of a product in the moving process of the 3D camera.
5. The visual inspection method of claim 4, wherein the step of adjusting the X-axis support, the Y-axis support, and the Z-axis support to move the 3D camera to the initial position comprises:
and adjusting the height of the Z-axis support so that the product lies within the depth-of-field range of the telecentric lens, and adjusting the X-axis support and the Y-axis support to move the 3D camera to the initial position.
6. The visual inspection method of claim 1, wherein the Z-axis mount, the X-axis mount, and the Y-axis mount are perpendicular to each other, respectively, the Y-axis mount includes a first mount and a second mount disposed opposite and spaced apart from each other, the X-axis mount is disposed between the first mount and the second mount, and the Z-axis mount is disposed on the X-axis mount, wherein the Z-axis mount moves along a vertical plane with respect to the X-axis mount, and the Y-axis mount moves along a horizontal plane with respect to the X-axis mount.
7. The visual inspection method of claim 1, wherein the X-axis support, the Y-axis support, and the Z-axis support are provided with motors, respectively, and the X-axis support, the Y-axis support, and the Z-axis support are independently moved or moved in cooperation with each other within the respective length ranges of the X-axis support, the Y-axis support, and the Z-axis support by the respective motors.
8. A vision inspection apparatus comprising a support, the support including a movable X-axis support, a Y-axis support and a Z-axis support that move in different directions in space respectively, wherein a 3D camera, a 2D camera and a telecentric lens are disposed on the Z-axis support and the telecentric lens is paired with the 2D camera, the vision inspection apparatus further comprising:
the detection module is used for controlling the X-axis support, the Y-axis support and the Z-axis support to move and simultaneously acquiring a 3D image of a product through the 3D camera;
the calibration module is used for calibrating the 2D camera in a preset shooting mode and preset shooting times;
and the data processing module is used for acquiring a 2D image of the product through the calibrated 2D camera and the telecentric lens, performing distortion correction on the 2D image to obtain a corrected image after distortion is eliminated, performing sub-pixel edge extraction on the corrected image to obtain a result image, and determining three-dimensional data of the product according to the 3D image and the result image.
9. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the visual inspection method according to any one of claims 1 to 7.
10. A computer storage medium, characterized in that the computer storage medium has stored thereon a vision inspection program which, when executed by a processor, implements the steps of the vision inspection method according to any one of claims 1 to 7.
CN202110964758.7A 2021-08-23 2021-08-23 Visual inspection method, apparatus, product and computer storage medium Pending CN113421207A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110964758.7A CN113421207A (en) 2021-08-23 2021-08-23 Visual inspection method, apparatus, product and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110964758.7A CN113421207A (en) 2021-08-23 2021-08-23 Visual inspection method, apparatus, product and computer storage medium

Publications (1)

Publication Number Publication Date
CN113421207A true CN113421207A (en) 2021-09-21

Family

ID=77719071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110964758.7A Pending CN113421207A (en) 2021-08-23 2021-08-23 Visual inspection method, apparatus, product and computer storage medium

Country Status (1)

Country Link
CN (1) CN113421207A (en)

Citations (5)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140098199A1 (en) * 2010-03-10 2014-04-10 Shapequest, Inc. Systems and methods for 2D image and spatial data capture for 3D stereo imaging
CN110136204A (en) * 2019-03-19 2019-08-16 浙江大学山东工业技术研究院 Sound film top dome assembly system based on the calibration of bilateral telecentric lens camera machine tool position
CN110560373A (en) * 2019-09-02 2019-12-13 湖南大学 multi-robot cooperation sorting and transporting method and system
CN111266254A (en) * 2020-03-17 2020-06-12 欣辰卓锐(苏州)智能装备有限公司 Automatic tracking dispensing equipment based on assembly line
CN112077862A (en) * 2020-09-29 2020-12-15 广汽本田汽车有限公司 Vision photographing robot

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117214054A (en) * 2023-11-09 2023-12-12 中国人民解放军国防科技大学 Novel video sonde
CN117214054B (en) * 2023-11-09 2024-03-01 中国人民解放军国防科技大学 Novel video sonde

Similar Documents

Publication Publication Date Title
US11544874B2 (en) System and method for calibration of machine vision cameras along at least three discrete planes
CN110689579A (en) Rapid monocular vision pose measurement method and measurement system based on cooperative target
US9886759B2 (en) Method and system for three-dimensional data acquisition
US6954212B2 (en) Three-dimensional computer modelling
US9275431B2 (en) Method and system for calibrating laser measuring apparatus
CN106767810B (en) Indoor positioning method and system based on WIFI and visual information of mobile terminal
CN111263142B (en) Method, device, equipment and medium for testing optical anti-shake of camera module
CN108581869B (en) Camera module alignment method
CN108074237B (en) Image definition detection method and device, storage medium and electronic equipment
CN110260818B (en) Electronic connector robust detection method based on binocular vision
CN108052869B (en) Lane line recognition method, lane line recognition device and computer-readable storage medium
CN113421207A (en) Visual inspection method, apparatus, product and computer storage medium
CN115855955A (en) Mold surface structure defect detection device and method based on multi-beam laser
US10260870B2 (en) On-line measuring system, datum calibrating method, deviation measuring method and computer-readable medium
CN110455813B (en) Universal system and method for extracting irregular arc edges
CN111798522A (en) Automatic plane position checking method, system and equipment for test prototype
CN108459558B (en) Positioning measurement device and positioning measurement method
KR101574195B1 (en) Auto Calibration Method for Virtual Camera based on Mobile Platform
CN116125489A (en) Indoor object three-dimensional detection method, computer equipment and storage medium
CN112308933B (en) Method and device for calibrating camera internal reference and computer storage medium
CN115684012A (en) Visual inspection system, calibration method, device and readable storage medium
CN114862963A (en) Bonding positioning method, device, equipment and storage medium
CN115103124A (en) Active alignment method for camera module
CN114693626A (en) Method and device for detecting chip surface defects and computer readable storage medium
CN114257703B (en) Automatic detection method and device for splicing and fusing images of four-eye low-light night vision device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210921)