CN112465774A - Air hole positioning method and system in air tightness test based on artificial intelligence


Info

Publication number: CN112465774A
Application number: CN202011345437.0A
Authority: CN (China)
Prior art keywords: image, camera, gray scale, acquiring, obtaining
Original language: Chinese (zh)
Inventors: 刘啟平, 赵华, 丁群芬
Applicant/Assignee: Zhengzhou Maitou Information Technology Co., Ltd.
Priority date / Filing date: 2020-11-25
Publication date: 2021-03-09
Legal status: Withdrawn after publication

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30164: Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the technical field of artificial intelligence, in particular to an air hole positioning method and system in an air tightness test based on artificial intelligence. The method comprises the following steps: acquiring a first image of the front of the detection pool with a first camera, and acquiring a second image of the side of the detection pool with a second camera; obtaining binary images of the two images and, through them, the corresponding region-of-interest images; converting the region-of-interest images into gray maps and obtaining the average gray difference; obtaining a target image containing the bubble trajectory line by a frame difference method, and obtaining the depth distances between the bubble trajectory line and the front and the side of the detection pool; compensating and adjusting the first camera and the second camera according to the average gray difference and the depth distances so that the exposure of the two cameras is the same; and continuously acquiring water body images after the adjustment, and obtaining the corresponding compensated depth distances, so as to obtain accurate air hole coordinates. The embodiment of the invention can eliminate the error caused by brightness differences at different water depths, making the air hole positioning more accurate.

Description

Air hole positioning method and system in air tightness test based on artificial intelligence
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an air hole positioning method and system in an air tightness test based on artificial intelligence.
Background
In the immersion bubble method for detecting the air tightness of a workpiece, gas at a certain pressure is introduced into the closed workpiece cavity, and the workpiece is then immersed in water or another liquid to observe whether bubbles emerge. Once bubbles are observed, the central problem is determining the position of the leak hole on the workpiece from the position of the bubbles.
In practice, the inventors found that the above prior art has the following disadvantages:
at present, locating the air hole with the bubble immersion method is quite difficult, and conventional depth cameras are largely defeated by reflections from the container glass. Moreover, when bubbles are located with a multi-view camera setup, the difference in water depth between the bubble and each camera is often ignored; light penetrates less readily at greater depths, which skews the imaging and introduces a certain error into the final calculation result.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide an air hole positioning method and system in an air tightness test based on artificial intelligence, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for positioning an air hole in an air tightness test based on artificial intelligence, including the following steps:
acquiring a first image of the front face of the detection pool by using a first camera, and acquiring a second image of the side face of the detection pool by using a second camera;
obtaining binary images of the first image and the second image through a semantic segmentation network respectively, and obtaining corresponding region-of-interest images through the binary images;
converting the image of the region of interest into a gray scale image, and acquiring the average gray scale difference of the gray scale images of the first image and the second image;
selecting a plurality of frames of first images or second images as original target images, obtaining frame difference images containing bubble track lines by using a frame difference method, obtaining target images containing bubble track lines according to the frame difference images and the original target images, and obtaining key point coordinates in the target images through a key point detection network, wherein the key point coordinates comprise intersection points of the bubble track lines and the surfaces of the workpieces to be detected; respectively obtaining the depth distances between the intersection point and the front and the side of the detection pool;
respectively obtaining a first ideal image gray scale of the first camera and a second ideal image gray scale of the second camera according to the average gray scale difference and the depth distance, and adjusting aperture coefficients of the first camera and the second camera according to the first ideal image gray scale and the second ideal image gray scale in a compensation mode so as to enable the exposure amount of the first camera and the exposure amount of the second camera to be the same;
and acquiring a water body image by using the adjusted first camera and the second camera, and further acquiring the corresponding depth distance after compensation adjustment so as to obtain an accurate air hole coordinate.
Preferably, the step of obtaining the average gray difference includes:
respectively counting the gray value of each pixel in the gray image of the first image and the gray image of the second image, and dividing the sum of the gray values of all the pixels by the number of the pixels to obtain a first average gray of the gray image of the first image and a second average gray of the gray image of the second image;
and taking the absolute value of the difference between the first average gray scale and the second average gray scale as the average gray scale difference.
Preferably, the depth distance acquiring step includes:
recording the intersection point as point P, and recording the intersections of the edge of the workpiece plane away from the camera with the pool walls on both sides in the selected image as points S_1 and S_2, and detecting point P, point S_1 and point S_2 as key points;
and calculating the depth distance between the intersection point P and the front surface and the side surface of the detection pool according to the distance between the key points.
Preferably, the ideal image gray scale acquiring step includes:
establishing a model of the change of the ideal image gray level along with the water depth:
[formula shown as an image in the original: g_wj expressed in terms of g_0, ρ, n_w, ε̄, w_j and δ]

wherein g_wj represents the ideal image gray scale of the j-th camera, g_0 represents the gray scale of the image of the detection pool without water shot by the j-th camera, ρ represents the Fresnel reflectivity of the water body, n_w represents the refractive index at the water-glass interface, ε̄ represents the average coefficient of variation of brightness with water depth, w_j represents the depth distance between the detected workpiece and the front or side of the detection pool, and δ represents the diffuse attenuation coefficient of the brightness of light in water.
The first ideal image gray g_w1 and the second ideal image gray g_w2 are calculated through the model.
Preferably, the step of compensation adjustment comprises:
and compensating and adjusting aperture coefficients of the first camera and the second camera through a corresponding mathematical relation between the exposure and the image field illumination, wherein the image field illumination is as follows:
E_j = k·g_wj / A²

wherein E_j represents the illuminance of the image field of the j-th camera, A represents the aperture coefficient, and k represents a proportionality constant.
In a second aspect, another embodiment of the present invention provides an air hole positioning system in an artificial intelligence-based air tightness test, which includes the following modules:
the image acquisition module is used for acquiring a first image of the front surface of the detection pool by using a first camera, and acquiring a second image of the side surface of the detection pool by using a second camera;
the region-of-interest acquisition module is used for respectively obtaining binary images of the first image and the second image through a semantic segmentation network and obtaining corresponding region-of-interest images through the binary images;
the average gray difference acquisition module is used for converting the image of the region of interest into a gray image and acquiring the average gray difference of the gray images of the first image and the second image;
the depth distance acquisition module is used for selecting a plurality of frames of first images or second images as original target images, obtaining frame difference images containing bubble track lines by using a frame difference method, obtaining target images containing the bubble track lines according to the frame difference images and the original target images, and acquiring key point coordinates in the target images through a key point detection network, wherein the key point coordinates comprise intersection points of the bubble track lines and the surfaces of the workpieces to be detected; respectively obtaining the depth distances between the intersection point and the front and the side of the detection pool;
the compensation adjusting module is used for respectively obtaining a first ideal image gray scale of the first camera and a second ideal image gray scale of the second camera according to the average gray scale difference and the depth distance, and adjusting aperture coefficients of the first camera and the second camera according to the first ideal image gray scale and the second ideal image gray scale in a compensation mode so as to enable the exposure amount of the first camera and the exposure amount of the second camera to be the same;
and the air hole coordinate acquisition module is used for acquiring the water body image by utilizing the adjusted first camera and the second camera, and further acquiring the corresponding depth distance after compensation adjustment so as to obtain accurate air hole coordinates.
Preferably, the average gray difference obtaining module further includes:
the average gray scale acquisition module is used for respectively counting the gray scale value of each pixel in the gray scale image of the first image and the gray scale image of the second image, and dividing the sum of the gray scale values of all the pixels by the number of the pixels to obtain a first average gray scale of the gray scale image of the first image and a second average gray scale of the gray scale image of the second image;
and the average gray difference calculation module is used for taking the absolute value of the difference between the first average gray and the second average gray as the average gray difference.
Preferably, the depth distance acquiring module further includes:
the key point detection module is used for recording the intersection point as a point P, recording the intersections of the edge of the plane of the workpiece to be detected away from the camera with the pool walls on both sides in the selected image as points S_1 and S_2, and detecting point P, point S_1 and point S_2 as key points;
and the depth distance calculation module is used for calculating the depth distance between the intersection point P and the front surface and the side surface of the detection pool according to the distance between the key points.
Preferably, the compensation adjustment module further comprises an ideal image gray scale obtaining module, configured to build a model of the ideal image gray scale changing with the water depth:
[formula shown as an image in the original: g_wj expressed in terms of g_0, ρ, n_w, ε̄, w_j and δ]

wherein g_wj represents the ideal image gray scale of the j-th camera, g_0 represents the gray scale of the image of the detection pool without water shot by the j-th camera, ρ represents the Fresnel reflectivity of the water body, n_w represents the refractive index at the water-glass interface, ε̄ represents the average coefficient of variation of brightness with water depth, w_j represents the depth distance between the detected workpiece and the front or side of the detection pool, and δ represents the diffuse attenuation coefficient of the brightness of light in water.
The first ideal image gray g_w1 and the second ideal image gray g_w2 are calculated through the model.
Preferably, the compensation adjustment module further comprises:
the image field illumination calculation module is used for compensating and adjusting the aperture coefficients of the first camera and the second camera through the corresponding mathematical relation between the exposure and the image field illumination, wherein the image field illumination is as follows:
E_j = k·g_wj / A²

wherein E_j represents the illuminance of the image field of the j-th camera, A represents the aperture coefficient, and k represents a proportionality constant.
The invention has the following beneficial effects:
establishing a model of water depth and illumination intensity, correcting camera imaging errors caused by different water depths, and increasing the robustness and accuracy of an air hole positioning result;
the position coordinates of the air holes are calculated by using a key point detection network, and the method is simpler and more convenient compared with the conventional multi-view positioning method.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a frame diagram of an air hole positioning method in an artificial intelligence-based air tightness test according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for positioning air holes in an artificial intelligence based air tightness test according to an embodiment of the present invention;
FIG. 3 is a schematic view of the geometric relationship between the detection cell and the surface of the workpiece in the collected image;
fig. 4 is a system diagram of an air hole positioning system in an artificial intelligence based air tightness test according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects adopted by the present invention to achieve its intended purpose, a method and a system for positioning air holes in an artificial intelligence based air tightness test according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of an air hole positioning method and system in an air tightness test based on artificial intelligence in detail with reference to the accompanying drawings.
Referring to fig. 1 and fig. 2, fig. 1 is a block diagram illustrating a method and a system for positioning air holes in an artificial intelligence based air tightness test according to an embodiment of the present invention, and fig. 2 is a flowchart illustrating a method for positioning air holes in an artificial intelligence based air tightness test according to an embodiment of the present invention, where the method includes the following steps:
and S001, acquiring a first image of the front surface of the detection pool by using a first camera, and acquiring a second image of the side surface of the detection pool by using a second camera.
Under the condition that the water is clear and stable, two telecentric cameras with the same parameters, mounting height and exposure are respectively arranged at the front and the side of the glass container for observation.
It should be noted that the embodiment of the present invention is directed to a scenario in which only one air hole exists in a workpiece.
Step S002, respectively obtaining binary images of the first image and the second image through a semantic segmentation network, and obtaining corresponding region-of-interest images through the binary images.
The method comprises the following specific steps:
1) training a semantic segmentation network, and inputting the first image and the second image into the trained network model to obtain their binary images.
In the embodiment of the invention, the semantic segmentation network uses a DNN network with an Encoder-Decoder structure.
The specific training content comprises:
a) taking first images and second images containing bubbles, acquired under clear and stable water, as the training data set, and labeling the data: bubble pixels are labeled 1 and all other pixels are labeled 0. 80% of the data is randomly selected as the training set and the remaining 20% as the validation set.
b) inputting the image data and the label data into the network: the Encoder extracts image features and converts the number of channels into the number of categories; the Decoder then restores the height and width of the feature map to the size of the input image, thereby outputting the class of each pixel.
c) the network is trained with a cross entropy loss function.
It should be noted that, because the water contains a large number of disordered attached bubbles, the embodiment of the present invention only needs to detect the continuous bubbles emerging from the air outlet. Therefore, only the single row of bubbles that emerges in a regular straight line is selected from the semantic segmentation result.
2) multiplying the binary image obtained by the semantic segmentation network with the original image to obtain the region-of-interest image.
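The following minimal sketch illustrates steps 1) and 2): a small Encoder-Decoder segmentation network trained with pixel-wise cross entropy, followed by the mask multiplication that yields the region of interest. The layer sizes, learning rate and input resolution are illustrative assumptions, not the architecture actually used in the embodiment.

import numpy as np
import torch
import torch.nn as nn

class BubbleSegNet(nn.Module):
    # Minimal Encoder-Decoder: the encoder extracts features and reduces
    # the channel count to the number of classes; the decoder restores the
    # feature map to the input size so every pixel receives a class score.
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )
        self.decoder = nn.Upsample(scale_factor=4, mode="bilinear",
                                   align_corners=False)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = BubbleSegNet()
criterion = nn.CrossEntropyLoss()                # pixel-wise cross entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

frames = torch.randn(4, 3, 128, 128)             # stand-in labeled frames
labels = torch.randint(0, 2, (4, 128, 128))      # 1 = bubble, 0 = other
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()

def region_of_interest(original_bgr, binary_mask):
    # Step 2): multiply the 0/1 binary map with the original image so that
    # only the detected bubble pixels survive.
    return original_bgr * binary_mask[:, :, np.newaxis]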
Step S003, converting the region-of-interest images into gray maps, and acquiring the average gray difference of the gray maps of the first image and the second image. The method comprises the following specific steps:
1) respectively counting the gray value of each pixel in the first-image gray map and the second-image gray map, and dividing the sum of the gray values of all pixels by the number of pixels to obtain a first average gray g_1 of the first-image gray map and a second average gray g_2 of the second-image gray map.
2) taking the absolute value of the difference between the first average gray and the second average gray as the average gray difference Δg:

Δg = |g_1 - g_2|
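A corresponding sketch of this computation; converting with OpenCV is an assumption, and any standard grayscale conversion serves.

import cv2
import numpy as np

def average_gray(roi_bgr):
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    # sum of all pixel gray values divided by the number of pixels
    return float(np.sum(gray)) / gray.size

def average_gray_difference(roi_front, roi_side):
    # Δg = |g_1 - g_2|
    return abs(average_gray(roi_front) - average_gray(roi_side))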
step S004, selecting a plurality of frames of first images or second images as original target images, obtaining frame difference images containing bubble track lines by using a frame difference method, obtaining target images containing bubble track lines according to the frame difference images and the original target images, and obtaining key point coordinates in the target images through a key point detection network, wherein the key point coordinates comprise intersection points of the bubble track lines and the surfaces of the workpieces to be detected; and respectively obtaining the depth distances between the intersection point and the front and the side of the detection pool.
The method comprises the following specific steps:
1) selecting a plurality of frames of first images or second images, differencing successive frames, and superposing the differences to finally obtain the trajectory line of the bubble.
In the embodiment of the invention, 50 frames of first images are selected to obtain the bubble trajectory line.
When determining the bubble trajectory, some noisy points in the picture can be excluded according to the position and length of the straight line.
2) marking the bubble trajectory line as 255, and multiplying it with the original image to obtain the target image containing the bubble trajectory line.
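A sketch of this frame-difference accumulation; the binarization threshold (20) is an assumed value, as the embodiment does not specify one.

import cv2
import numpy as np

def bubble_trajectory(frames_bgr):
    # Difference successive frames and superpose (OR) the results so the
    # moving bubble traces out its trajectory line, marked 255.
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames_bgr]
    acc = np.zeros(grays[0].shape, dtype=np.uint8)
    for prev, curr in zip(grays, grays[1:]):
        diff = cv2.absdiff(prev, curr)
        _, moving = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
        acc = cv2.bitwise_or(acc, moving)
    return acc

def trajectory_target_image(trajectory_mask, original_bgr):
    # Multiply the trajectory mask (rescaled to 0/1) with the original
    # image to obtain the target image containing the trajectory line.
    return original_bgr * (trajectory_mask // 255)[:, :, None]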
3) As shown in FIG. 3, the intersection point of the bubble trajectory line and the measured workpiece plane is denoted as point P, and the intersections of the edge of the workpiece plane away from the camera with the pool walls on both sides in the image are denoted as points S_1 and S_2; taking point P, point S_1 and point S_2 as the key points, the target image is input into the trained key point detection network to obtain the coordinates of each key point.
The training content of the key point detection network comprises the following steps:
a) taking as the data set the images in which the trajectory line, obtained by frame-differencing 50 frames of first images, intersects the workpiece plane; 80% of the data set is randomly selected as the training set and the remaining 20% as the validation set.
b) The labels used for the data are key-point labels, i.e. key points mark the positions of the two corner points and the intersection point shown in fig. 3. In the marking process, the position points corresponding to the targets are first marked on a single channel of the same size as the data image, which is then processed with a Gaussian kernel to form the key-point hot spots.
c) The loss function is a mean square error loss function.
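A sketch of the heatmap labeling in step b); the Gaussian σ is an assumed hyperparameter. Training then regresses such maps under the mean-square-error loss of step c).

import numpy as np

def keypoint_heatmaps(height, width, points, sigma=3.0):
    # points: pixel coordinates of P, S1 and S2. Each key point gets one
    # single-channel map with a Gaussian "hot spot" at its position.
    ys, xs = np.mgrid[0:height, 0:width]
    maps = [np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
            for (px, py) in points]
    return np.stack(maps).astype(np.float32)     # shape (3, height, width)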
4) constructing a mathematical model according to the coordinates of the key points, and calculating the depth distances between the intersection point P and the front and the side of the detection pool.
The specific steps of obtaining the depth distance include:
recording the horizontal projection of point P onto the line S_1S_2 as point O, and establishing a mathematical model to calculate the distance OP between the bubble trajectory line and the detection-pool wall S_1S_2. The right triangles S_1OP and S_2OP share the altitude OP, so that

d_1² = x_1² + d²,  d_2² = x_2² + d²,  x_1 + x_2 = D

and therefore

d = sqrt(d_1² - ((D² + d_1² - d_2²) / (2D))²)

where x_1 denotes the length of S_1O, x_2 denotes the length of S_2O, D denotes the length of S_1S_2 and is a known quantity of the detection pool, d denotes the length of OP, d_1 denotes the length of S_1P, and d_2 denotes the length of S_2P.
Subtracting the length of OP from the length OM gives the first depth distance PM from the intersection point P to the front of the detection pool; the length of S_2O is the second depth distance from the intersection point P to the side of the detection pool.
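Under the right-triangle relations above, the depth distances follow directly from the key-point distances; a sketch, treating OM (from the line S_1S_2 to the front of the pool) as a known dimension of the detection pool:

import math

def depth_distances(d1, d2, D, OM):
    # Right triangles S1-O-P and S2-O-P share the altitude d = OP:
    #   x1 + x2 = D,  d1^2 = x1^2 + d^2,  d2^2 = x2^2 + d^2
    x1 = (D ** 2 + d1 ** 2 - d2 ** 2) / (2.0 * D)
    x2 = D - x1
    d = math.sqrt(max(d1 ** 2 - x1 ** 2, 0.0))
    first_depth = OM - d     # PM: intersection point P to the pool front
    second_depth = x2        # S2O: intersection point P to the pool side
    return first_depth, second_depth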
Step S005, respectively obtaining a first ideal image gray of the first camera and a second ideal image gray of the second camera according to the average gray difference and the depth distance, and compensating and adjusting the aperture coefficients of the first camera and the second camera according to the first and second ideal image grays, so that the exposure of the two cameras is the same.
Although the shooting environments of the two cameras are the same at this point, in the real imaging pictures the water blocks light to different degrees at different depths, which introduces a certain error when the bubble trajectory is multiplied with the original image in the subsequent network detection.
Specifically, the compensation adjustment step includes:
1) a first ideal image gray scale and a second ideal image gray scale are calculated.
The method comprises the following specific steps:
a) repeatedly taking the data of n groups of different workpiece air holes, and calculating the average gray difference Δg_i and the depth distance difference Δd_i in each group of data,
where the depth distance difference is the absolute value of the difference between the first depth distance and the second depth distance.
b) calculating the coefficient of variation of brightness with water depth:

ε_i = Δg_i / Δd_i

where ε_i is the coefficient of variation of brightness with water depth obtained from the i-th group of data, Δg_i represents the average gray difference in the i-th group of data, and Δd_i represents the depth distance difference in the i-th group of data.
c) averaging the n values ε_i to obtain the average coefficient of variation of brightness with water depth:

ε̄ = (ε_1 + ε_2 + … + ε_n) / n
As an example, n is 100 in the embodiment of the present invention.
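A sketch of steps a) through c), assuming the n paired measurements (Δg_i, Δd_i) are already available:

def mean_brightness_variation(gray_diffs, depth_diffs):
    # ε_i = Δg_i / Δd_i for each of the n data groups; their mean is the
    # average coefficient of variation of brightness with water depth.
    eps = [dg / dd for dg, dd in zip(gray_diffs, depth_diffs)]
    return sum(eps) / len(eps)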
d) establishing a model of the change of the ideal image gray scale with the water depth:

[formula shown as an image in the original: g_wj expressed in terms of g_0, ρ, n_w, ε̄, w_j and δ]

where g_wj represents the ideal image gray scale of the j-th camera, g_0 represents the gray scale of the image of the detection pool without water shot by the j-th camera, ρ represents the Fresnel reflectivity of the water body, n_w represents the refractive index at the water-glass interface, w_j represents the depth distance between the detected workpiece and the front or side of the detection pool, and δ represents the diffuse attenuation coefficient of the brightness of light in water.
In the embodiment of the present invention, ρ = 0.62, n_w = 1.128, and δ = 1.68.
When shooting, a checkerboard is used as the baffle background: when g_0 is calculated, the baffle background is placed tightly against the inner glass wall on the j-th camera side; at all other times it is placed tightly against the opposite glass wall, far from the j-th camera. The baffle background eliminates errors caused by differences in the background.
The first ideal image gray g_w1 and the second ideal image gray g_w2 are calculated through the model, and the exposure intensity of the cameras is compensated and adjusted according to the difference between the two, so that different cameras reach the same exposure for the same target at different water depths.
The specific adjusting process is as follows:
the aperture coefficient is adjusted according to the exposure model and the image field illuminance E.
Specifically, the exposure model is:

H = E · T

where H denotes the exposure, E denotes the image field illuminance, and T denotes the exposure time, with E = k·G/A², where G denotes the target brightness, A denotes the aperture coefficient, and k denotes a proportionality constant.
Taking the ideal image gray g_wj as the target brightness G gives E_j = k·g_wj/A_j², i.e. E_1 = k·g_w1/A_1² and E_2 = k·g_w2/A_2². Comparing the two:

E_1 / E_2 = (g_w1 · A_2²) / (g_w2 · A_1²)

and setting E_1 = E_2, the aperture coefficients are adjusted so that

A_1 / A_2 = sqrt(g_w1 / g_w2)

which allows the bubble trajectory to achieve the same exposure in both cameras.
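A sketch of this compensation, holding A_1 fixed and solving E_1 = E_2 for A_2 (a pure rearrangement of the relations above):

import math

def compensated_aperture(g_w1, g_w2, A1):
    # From E_j = k * g_wj / A_j^2 and E_1 = E_2:
    #   A2 = A1 * sqrt(g_w2 / g_w1)
    return A1 * math.sqrt(g_w2 / g_w1)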
Step S006, acquiring water body images by using the adjusted first camera and second camera, and further obtaining the corresponding compensated depth distances, so as to obtain accurate air hole coordinates.
Specifically, the first camera and the second camera after exposure compensation are used to continuously collect images; the images are fed into the key point network, and the depth distances from the now brightness-error-free bubble trajectory line to the front and the side are calculated through the mathematical model, so that the real air hole coordinates are obtained.
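Tying the compensated capture back to the measurement, a sketch of this final step; the key-point network is abstracted as a callable returning the pixel coordinates of P, S_1 and S_2, D and OM are the known pool dimensions, and the helper functions refer to the earlier sketches.

import math

def locate_air_hole(frames_bgr, keypoint_net, D, OM):
    # Frames come from the compensated cameras, so the trajectory is free
    # of depth-dependent brightness error.
    mask = bubble_trajectory(frames_bgr)
    target = trajectory_target_image(mask, frames_bgr[-1])
    P, S1, S2 = keypoint_net(target)
    d1, d2 = math.dist(S1, P), math.dist(S2, P)
    return depth_distances(d1, d2, D, OM)   # (front depth, side depth)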
In summary, in the embodiment of the present invention, telecentric cameras with the same parameters, installed on the front and the side of the glass container, shoot the bubbles in the same environment. The acquired images are passed through the trained neural network to obtain binary images, which are multiplied with the original images to obtain the region-of-interest images. The region-of-interest images are converted into gray maps, and the average gray difference between the two cameras' images is calculated; the depth distances from the bubble to the two cameras are obtained through the key point network. From the average gray difference and the depth distance difference, the gray changes of bubbles at different water depths are analyzed and a model is established; the illumination intensity of the cameras is corrected and compensated according to the model, and the compensated cameras shoot again to obtain the real position coordinates of the air hole. The embodiment of the invention can eliminate the error caused by brightness differences at different water depths, so that the air hole is located more accurately and the damaged part can be repaired in time.
Based on the same inventive concept as the above method, another embodiment of the present invention further provides an air hole positioning system in an air tightness test based on artificial intelligence, referring to fig. 4, the system includes the following modules: the system comprises an image acquisition module 1001, a region of interest acquisition module 1002, an average gray difference acquisition module 1003, a depth distance acquisition module 1004, a compensation adjustment module 1005 and a pore coordinate acquisition module 1006.
The image acquisition module 1001 is configured to acquire a first image of the front surface of the detection cell by using a first camera, and acquire a second image of the side surface of the detection cell by using a second camera; the region-of-interest obtaining module 1002 is configured to obtain binary images of the first image and the second image through a semantic segmentation network, and obtain corresponding region-of-interest images through the binary images; the average gray difference obtaining module 1003 is configured to convert the region of interest image into a gray map, and obtain an average gray difference of the gray maps of the first image and the second image; the depth distance acquisition module 1004 is configured to select multiple frames of first images or second images as original target images, obtain a frame difference image containing a bubble trajectory line by using a frame difference method, obtain a target image containing a bubble trajectory line according to the frame difference image and the original target images, and acquire a key point coordinate in the target image through a key point detection network, where the key point coordinate includes an intersection point of the bubble trajectory line and a surface of a workpiece to be measured; respectively obtaining the depth distances between the intersection point and the front and the side of the detection pool; the compensation adjusting module 1005 is configured to obtain a first ideal image gray scale of the first camera and a second ideal image gray scale of the second camera according to the average gray scale difference and the depth distance, and adjust aperture coefficients of the first camera and the second camera according to the first ideal image gray scale and the second ideal image gray scale so that the exposure amounts of the first camera and the second camera are the same; the air hole coordinate obtaining module 1006 is configured to collect a water volume image by using the adjusted first camera and the second camera, and further obtain a corresponding depth distance after compensation adjustment, so as to obtain an accurate air hole coordinate.
Preferably, the average gray difference obtaining module further includes:
the average gray scale acquisition module is used for respectively counting the gray scale value of each pixel in the gray scale image of the first image and the gray scale image of the second image, and dividing the sum of the gray scale values of all the pixels by the number of the pixels to obtain a first average gray scale of the gray scale image of the first image and a second average gray scale of the gray scale image of the second image;
and the average gray difference calculation module is used for taking the absolute value of the subtraction of the first average gray and the second average gray as the average gray difference.
Preferably, the depth distance acquiring module further includes:
the key point detection module is used for recording the intersection point as a point P, recording the intersections of the edge of the plane of the workpiece to be detected away from the camera with the pool walls on both sides in the selected image as points S_1 and S_2, and detecting point P, point S_1 and point S_2 as key points;
and the depth distance calculation module is used for calculating the depth distance between the intersection point P and the front surface and the side surface of the detection pool according to the distance between the key points.
Preferably, the compensation adjustment module further comprises an ideal image gray scale obtaining module, configured to build a model of the ideal image gray scale changing with the water depth:
[formula shown as an image in the original: g_wj expressed in terms of g_0, ρ, n_w, ε̄, w_j and δ]

wherein g_wj represents the ideal image gray scale of the j-th camera, g_0 represents the gray scale of the image of the detection pool without water shot by the j-th camera, ρ represents the Fresnel reflectivity of the water body, n_w represents the refractive index at the water-glass interface, ε̄ represents the average coefficient of variation of brightness with water depth, w_j represents the depth distance between the detected workpiece and the front or side of the detection pool, and δ represents the diffuse attenuation coefficient of the brightness of light in water.
The first ideal image gray g_w1 and the second ideal image gray g_w2 are calculated through the model.
Preferably, the compensation adjustment module further comprises:
the image field illumination calculation module is used for compensating and adjusting the aperture coefficients of the first camera and the second camera through the corresponding mathematical relation between the exposure and the image field illumination, wherein the image field illumination is as follows:
E_j = k·g_wj / A²

wherein E_j represents the illuminance of the image field of the j-th camera, A represents the aperture coefficient, and k represents a proportionality constant.
In summary, in the embodiment of the present invention, the image acquisition module first acquires the images; the region-of-interest acquisition module then obtains the region-of-interest images; the average gray difference acquisition module converts the region-of-interest images into gray maps and calculates the average gray difference between the two cameras' images; the depth distance acquisition module obtains the water-depth distances from the bubble to the two cameras; the compensation adjustment module analyzes the gray changes of the bubble at different water depths from the average gray difference and the depth distance, builds a model, and corrects and compensates the illumination intensity of the cameras; and the air hole coordinate acquisition module shoots again with the compensated cameras to finally obtain the real air hole coordinates. The embodiment of the invention can eliminate the error caused by brightness differences at different water depths, so that the air hole is located more accurately and the damaged part can be repaired in time.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An air hole positioning method in an air tightness test based on artificial intelligence is characterized by comprising the following steps:
acquiring a first image of the front face of a detection pool by using a first camera, and acquiring a second image of the side face of the detection pool by using a second camera;
obtaining binary images of the first image and the second image through a semantic segmentation network respectively, and obtaining corresponding region-of-interest images through the binary images;
converting the image of the region of interest into a gray scale image, and acquiring the average gray scale difference of the gray scale images of the first image and the second image;
selecting a plurality of frames of the first image or the second image as an original target image, obtaining a frame difference image containing a bubble trajectory line by using a frame difference method, obtaining a target image containing the bubble trajectory line according to the frame difference image and the original target image, and obtaining a key point coordinate in the target image through a key point detection network, wherein the key point coordinate comprises an intersection point of the bubble trajectory line and the surface of a workpiece to be detected; respectively obtaining the depth distances between the intersection point and the front and the side of the detection pool;
respectively obtaining a first ideal image gray scale of the first camera and a second ideal image gray scale of the second camera according to the average gray scale difference and the depth distance, and compensating and adjusting aperture coefficients of the first camera and the second camera according to the first ideal image gray scale and the second ideal image gray scale so as to enable the exposure amount of the first camera and the second camera to be the same;
and acquiring a water body image by using the adjusted first camera and the second camera, and further acquiring a corresponding depth distance after compensation adjustment so as to obtain an accurate air hole coordinate.
2. The method for locating the air holes in the artificial intelligence based air tightness test according to claim 1, wherein the step of obtaining the average gray level difference comprises:
respectively counting the gray value of each pixel in the gray image of the first image and the gray image of the second image, and dividing the sum of the gray values of all the pixels by the number of the pixels to obtain a first average gray of the gray image of the first image and a second average gray of the gray image of the second image;
and taking the absolute value of the difference between the first average gray scale and the second average gray scale as the average gray scale difference.
3. The method for locating the air holes in the artificial intelligence based air tightness test according to claim 1, wherein the step of obtaining the depth distance comprises:
recording the intersection point as a point P, recording the intersections of the edge of the workpiece plane away from the camera with the pool walls on both sides in the selected image as points S_1 and S_2, and detecting said point P, point S_1 and point S_2 as key points;
and calculating the depth distance between the intersection point P and the front surface of the detection pool and the side surface of the detection pool according to the distance between the key points.
4. The method for locating the air holes in the artificial intelligence based air tightness test according to claim 1, wherein the ideal image gray scale obtaining step comprises:
establishing a model of the change of the ideal image gray level along with the water depth:
[formula shown as an image in the original: g_wj expressed in terms of g_0, ρ, n_w, ε̄, w_j and δ]

wherein g_wj represents the ideal image gray scale of the j-th camera, g_0 represents the gray scale of the image of the detection pool without water shot by the j-th camera, ρ represents the Fresnel reflectivity of the water body, n_w represents the refractive index at the water-glass interface, ε̄ represents the average coefficient of variation of brightness with water depth, w_j represents the depth distance between the detected workpiece and the front or side of the detection pool, and δ represents the diffuse attenuation coefficient of the brightness of light in water;
calculating the first ideal image gray scale g_w1 and the second ideal image gray scale g_w2 through the model.
5. The method of claim 4, wherein the step of adjusting the compensation comprises:
adjusting the aperture factor of the first camera and the second camera by compensating the corresponding mathematical relationship between the exposure and the image field illumination, wherein the image field illumination is:
E_j = k·g_wj / A²

wherein E_j represents the illuminance of the image field of the j-th camera, A represents the aperture coefficient, and k represents a proportionality constant.
6. An air hole positioning system in air tightness test based on artificial intelligence is characterized by comprising the following modules:
the image acquisition module is used for acquiring a first image of the front surface of the detection pool by using a first camera, and acquiring a second image of the side surface of the detection pool by using a second camera;
the region-of-interest acquisition module is used for respectively obtaining binary images of the first image and the second image through a semantic segmentation network and obtaining corresponding region-of-interest images through the binary images;
the average gray difference acquisition module is used for converting the image of the region of interest into a gray map and acquiring the average gray difference of the gray maps of the first image and the second image;
the depth distance acquisition module is used for selecting a plurality of frames of the first images or the second images as original target images, obtaining frame difference images containing bubble track lines by using a frame difference method, obtaining target images containing the bubble track lines according to the frame difference images and the original target images, and acquiring key point coordinates in the target images through a key point detection network, wherein the key point coordinates comprise intersection points of the bubble track lines and the surfaces of the workpieces to be detected; respectively obtaining the depth distances between the intersection point and the front and the side of the detection pool;
the compensation adjusting module is used for respectively obtaining a first ideal image gray scale of the first camera and a second ideal image gray scale of the second camera according to the average gray scale difference and the depth distance, and adjusting aperture coefficients of the first camera and the second camera according to the first ideal image gray scale and the second ideal image gray scale in a compensation mode so that the exposure amount of the first camera and the exposure amount of the second camera are the same;
and the air hole coordinate acquisition module is used for acquiring a water body image by utilizing the adjusted first camera and the second camera so as to acquire a corresponding depth distance after compensation adjustment, so as to obtain an accurate air hole coordinate.
7. The system of claim 6, wherein the average gray level difference obtaining module further comprises:
an average gray scale obtaining module, configured to count gray scale values of each pixel in the gray scale map of the first image and the gray scale map of the second image, respectively, and divide the sum of the gray scale values of all the pixels by the number of the pixels to obtain a first average gray scale of the gray scale map of the first image and a second average gray scale of the gray scale map of the second image;
and the average gray difference calculation module is used for taking the absolute value of the difference between the first average gray and the second average gray as the average gray difference.
8. The system of claim 6, wherein the depth distance obtaining module further comprises:
a key point detection module for recording the intersection point as a point P, recording the intersections of the edge of the workpiece plane away from the camera with the pool walls on both sides in the selected image as points S_1 and S_2, and detecting said point P, point S_1 and point S_2 as key points;
and the depth distance calculation module is used for calculating the depth distance between the intersection point P and the front surface and the side surface of the detection pool through the distance between the key points.
9. The air hole positioning system in the artificial intelligence based air tightness test as claimed in claim 6, wherein said compensation adjusting module further comprises an ideal image gray scale obtaining module for establishing a model of ideal image gray scale variation with water depth:
[formula shown as an image in the original: g_wj expressed in terms of g_0, ρ, n_w, ε̄, w_j and δ]

wherein g_wj represents the ideal image gray scale of the j-th camera, g_0 represents the gray scale of the image of the detection pool without water shot by the j-th camera, ρ represents the Fresnel reflectivity of the water body, n_w represents the refractive index at the water-glass interface, ε̄ represents the average coefficient of variation of brightness with water depth, w_j represents the depth distance between the detected workpiece and the front or side of the detection pool, and δ represents the diffuse attenuation coefficient of the brightness of light in water;
calculating the first ideal image gray scale g_w1 and the second ideal image gray scale g_w2 through the model.
10. The system of claim 9, wherein the compensation adjustment module further comprises:
the image field illumination calculation module is used for compensating and adjusting the aperture coefficients of the first camera and the second camera through the corresponding mathematical relation between the exposure amount and the image field illumination, wherein the image field illumination is as follows:
E_j = k·g_wj / A²

wherein E_j represents the illuminance of the image field of the j-th camera, A represents the aperture coefficient, and k represents a proportionality constant.
CN202011345437.0A (filed 2020-11-25, priority 2020-11-25): Air hole positioning method and system in air tightness test based on artificial intelligence. Status: Withdrawn. Published as CN112465774A.

Priority Applications (1)

Application Number: CN202011345437.0A | Priority Date: 2020-11-25 | Filing Date: 2020-11-25 | Title: Air hole positioning method and system in air tightness test based on artificial intelligence

Publications (1)

Publication Number: CN112465774A | Publication Date: 2021-03-09

Family ID: 74808439

Family Applications (1)

Application Number: CN202011345437.0A | Title: Air hole positioning method and system in air tightness test based on artificial intelligence

Country Status (1)

CN: CN112465774A


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113916456A (en) * 2021-09-06 2022-01-11 南京金陵特种设备安全附件检验中心 Safety valve sealing test method
CN113567058A (en) * 2021-09-22 2021-10-29 南通中煌工具有限公司 Light source parameter adjusting method based on artificial intelligence and visual perception
CN116756477A (en) * 2023-08-23 2023-09-15 深圳市志奋领科技有限公司 Precise measurement method based on Fresnel diffraction edge characteristics
CN116756477B (en) * 2023-08-23 2023-12-26 深圳市志奋领科技有限公司 Precise measurement method based on Fresnel diffraction edge characteristics

Similar Documents

Publication Publication Date Title
CN112465774A (en) Air hole positioning method and system in air tightness test based on artificial intelligence
US20220148213A1 (en) Method for fully automatically detecting chessboard corner points
CN104966308B (en) A kind of method for calculating laser beam spot size
CN104173054B (en) Measuring method and measuring device for height of human body based on binocular vision technique
CN107633536A (en) A kind of camera calibration method and system based on two-dimensional planar template
CN107025670A (en) A kind of telecentricity camera calibration method
CN106683070A (en) Body height measurement method and body height measurement device based on depth camera
CN109827502A (en) A kind of line structured light vision sensor high-precision calibrating method of calibration point image compensation
CN106996748A (en) Wheel diameter measuring method based on binocular vision
CN105444696B (en) A kind of binocular ranging method and its application based on perspective projection line measurement model
CN106023153B (en) A kind of method of bubble in measurement water body
CN110889829A (en) Monocular distance measurement method based on fisheye lens
CN105184857A (en) Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging
CN113066076B (en) Rubber tube leakage detection method, device, equipment and storage medium
CN109325927B (en) Image brightness compensation method for industrial camera photogrammetry
CN109003312A (en) A kind of camera calibration method based on nonlinear optimization
CN114972085A (en) Fine-grained noise estimation method and system based on contrast learning
CN110223355A (en) A kind of feature mark poiX matching process based on dual epipolar-line constraint
CN104574312A (en) Method and device of calculating center of circle for target image
CN113763346A (en) Binocular vision-based method for detecting facade operation effect and surface defect
CN114998308A (en) Defect detection method and system based on photometric stereo
CN112132830A (en) Air tightness detection water body shaking sensing method and system based on artificial intelligence
CN117333860A (en) Ship water gauge reading method and device based on deep learning
CN108180825A (en) A kind of identification of cuboid object dimensional and localization method based on line-structured light
CN117218057A (en) New energy battery pole welding line defect detection method, equipment and storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 2021-03-09)