CN114882101A - Sealed container leakage amount measuring method based on deep learning and image processing - Google Patents


Info

Publication number
CN114882101A
CN114882101A
Authority
CN
China
Prior art keywords
white
bubble
black
pixel
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210809336.7A
Other languages
Chinese (zh)
Inventor
詹曙
宋万
丁正龙
臧怀娟
刘睿哲
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202210809336.7A
Publication of CN114882101A
Legal status: Pending


Classifications

    • G06T 7/62 — Image analysis; analysis of geometric attributes of area, perimeter, diameter or volume
    • G01F 22/00 — Methods or apparatus for measuring volume of fluids or fluent solid material, not otherwise provided for
    • G01M 3/10 — Investigating fluid-tightness of structures by using fluid or vacuum, by observing bubbles in a liquid pool, for containers, e.g. radiators
    • G06N 3/045 — Neural networks; architectures; combinations of networks
    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/66 — Analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/10016 — Indexing scheme for image analysis; image acquisition modality: video; image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Fluid Mechanics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for measuring the leakage amount of a sealed container based on deep learning and image processing, which comprises the following steps: step 1, acquiring images while bubbles are generated by the leakage of the sealed container; step 2, extracting features with YOLOv5 and tracking with a target tracking algorithm to obtain a cropped bubble frame picture of each bubble in every frame of the footage; step 3, applying image processing to the cropped bubble frame pictures obtained in step 2 to obtain the geometric data and coordinate data of each bubble; step 4, calculating the volume of each bubble from its geometric data and coordinate data; step 5, summing the volumes of all bubbles to obtain the leakage amount of the sealed container. The method determines the leakage amount of the sealed container through intelligent processing, with high accuracy and high speed.

Description

Sealed container leakage amount measuring method based on deep learning and image processing
Technical Field
The invention relates to the field of leakage measuring methods, in particular to a method for measuring leakage of a sealed container based on deep learning and image processing.
Background
A sealed container is an enclosed space that must be completely isolated from the outside during operation, except for specific inlets and outlets. Containers used in vehicles, gas storage tanks, air-conditioning compressors, oil pipelines and the like are common sealed containers in daily life. Airtightness detection methods for sealed containers have been studied extensively and fall mainly into qualitative measurement and quantitative measurement.
Qualitative detection mainly determines whether the sealed container or pipeline leaks, but cannot express the leakage amount as accurate data; it mainly includes the traditional manual water-immersion method, ultrasonic detection and infrared detection. Quantitative detection can obtain the magnitude of the leakage amount and includes the tracer-gas method, pressure-sensor detection and the like.
As digital image processing is applied ever more widely to product defect detection, airtightness-detection technology based on machine vision has gradually attracted researchers' attention, and more and more researchers add machine vision to the traditional manual water-immersion method to replace manual observation. The machine-vision bubble method has the advantages of low cost and the ability to locate leakage points.
However, existing machine-vision bubble methods are improvements on the traditional water-immersion method and suffer from the following problems: (1) demanding application environments: during computer-vision bubble detection the bubbles must be identified accurately for subsequent processing, which requires a stable light source and clean water, yet placing the workpiece directly into the water pollutes the water body and hampers subsequent inspections; (2) airtightness-detection scenes are strongly affected by illumination: the light and shade of industrial-site lighting vary in complex ways, the quality of the acquired images is uneven, and traditional image detection and segmentation algorithms perform poorly; (3) after underwater inspection the workpiece surface must be dried and cleaned, which increases energy consumption; (4) the leakage amount of the workpiece cannot be described quantitatively; (5) the approach is unsuitable for inspecting several sealed containers simultaneously, so detection efficiency is low.
Disclosure of Invention
The invention aims to provide a method for measuring the leakage amount of a sealed container based on deep learning and image processing and, to overcome the limitations of existing airtightness-detection technology, proposes a deep-learning-based mechanism for dry airtightness detection of sealed containers. A complete automatic detection device is designed (taking the filter, in huge demand in the automobile industry, as an example): gas leaking from the sealed container is led into a water box, where it appears as bubbles, and the bubble volumes are measured through a DCNN and bubble-volume modelling, so that the leakage amount of the sealed container is obtained indirectly. Compared with machine-vision airtightness detection derived from the traditional water-immersion method, (1) the method avoids the many problems caused by placing the workpiece under test directly into water; (2) compared with a high-precision pressure sensor, it avoids the influence of ambient temperature on the stability of the measurement result. In addition, the method promptly vents the leaked gas to the atmosphere as bubbles during measurement, so the pressure of the sealed cavity between the gas hood and the sealed container stays almost unchanged and is not limited by the range of a pressure sensor. The problems encountered in airtightness detection can thus be solved effectively.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the method for measuring the leakage quantity of the sealed container based on deep learning and image processing comprises the following steps:
step1, acquiring an image when bubbles are generated in liquid when a sealed container leaks;
step2, taking the image collected in the step1 as input, adopting YOLOv5 to perform feature extraction, and adopting a target tracking algorithm to track each bubble in the image as a target, thereby obtaining a bubble capturing frame picture of each frame image of each bubble in the image;
step 3, performing image processing on the bubble intercepting frame picture of each frame image of each bubble obtained in the step2 to obtain geometric data and coordinate data of each bubble;
step 4, based on the geometric data and coordinate data of each bubble obtained in step 3, calculating the volume V of each bubble with the following formula:

V = (C · S · v²) / (2(g − a))

wherein:
v is the gas flow velocity, i.e. the movement velocity of the bubble, determined from the coordinate data of the bubble;
C is the fluid-resistance coefficient, determined by the liquid medium in which the sealed container is placed; this scheme uses pure water;
S is the area of the windward face of the bubble, determined from the geometric data obtained in step 3;
g is the gravitational acceleration;
a is the acceleration of the bubble in the vertical direction;
step 5, summing the volumes V of all bubbles obtained in step 4 to obtain the leakage amount of the sealed container.
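As a numeric illustration of steps 4 and 5, the sketch below assumes the force-balance reading suggested by the variables listed in the claims, V = C·S·v²/(2(g − a)); both this reading of the formula and every sample value are assumptions for illustration, not the patent's implementation:

```python
# Hedged sketch: per-bubble volume from an assumed force-balance formula,
# then summed over all tracked bubbles. All numeric values are invented.

def bubble_volume(v, C, S, g=9.81, a=0.0):
    """V = C*S*v^2 / (2*(g - a)) for a bubble rising at speed v (m/s)
    with windward area S (m^2) and drag coefficient C (dimensionless)."""
    return C * S * v * v / (2.0 * (g - a))

# Hypothetical tracked bubbles: (velocity, drag coefficient, area, accel)
bubbles = [
    (0.20, 0.8, 1.0e-5, 0.0),
    (0.25, 0.8, 1.2e-5, 0.5),
]

# Step 5: the leakage amount is the sum of all bubble volumes.
leakage = sum(bubble_volume(v, C, S, a=a) for v, C, S, a in bubbles)
print(f"total leakage: {leakage:.3e} m^3")
```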
Further, in the YOLOv5 adopted in step 2, the original CSP convolution kernels of YOLOv5 are replaced by an asymmetric convolution block;
the asymmetric convolution block comprises three convolution kernels, namely a 3 × 3 × c kernel, a 3 × 1 × c kernel and a 1 × 3 × c kernel, wherein:
the 3 × 3 × c kernel is a regular convolution used to extract the basic features of the bubbles in the image;
the 3 × 1 × c and 1 × 3 × c kernels are vertical and horizontal convolution kernels that extract the vertical and horizontal features of the bubbles in the image respectively; these two kernels can also extract the position and rotation features of the target.
Further, in step 2, the DeepSORT algorithm is adopted for target tracking.
Further, in step 3, when processing the cropped frame picture of each bubble obtained in step 2, a binary image is obtained after Gaussian blur, Sobel edge extraction and hole filling are applied in sequence; the bubble part of the binary image consists of white pixels and the remainder of black pixels. Crack filling is then performed as follows:
step1, initial filling:
Traverse the pixels of the binary image row by row from top to bottom, scanning each row first from left to right and then from right to left, and judge during the traversal whether a crack region exists in the binary image, wherein:
during the left-to-right scan, if a crack region exists, there are two cases according to the distance between the crack region and the bubble edge:
(a1) if the pixel sequence is white, black, black, mark the first black pixel as a line start point, denoted S1;
(a2) if the pixel sequence is white, white, black, mark the black pixel as a line start point, denoted S2;
during the left-to-right scan, if no crack region exists and the pixel sequence is black, white, white, mark the last white pixel as a line start point, denoted S3;
during the right-to-left scan, if a crack region exists, there are two cases according to the distance between the crack region and the bubble edge:
(a3) if the pixel sequence is white, black, black, mark the white pixel as a line end point, denoted E1;
(a4) if the pixel sequence is white, white, black, mark the last white pixel as a line end point, denoted E2;
during the right-to-left scan, if no crack region exists and the pixel sequence is black, white, white, mark the last white pixel as a line end point, denoted E3;
after all traversals and marking are finished, match each line start point with the line end point of the same row and change the pixel values from the line start point to the line end point of every row to 255, completing the initial filling of the bubble after hole filling;
step2, final fill:
after filling according to Step1, traversing pixel points from left to right, and then sequentially traversing according to the sequence from top to bottom and from bottom to top, wherein:
in the process of going from top to bottom, if a crack area exists, the two conditions are divided according to the distance between the crack area and the edge of the bubble:
(b1) if the sequence of the pixel points is white, black and black, marking the first black pixel point as a column starting point;
(b2) if the sequence of the pixel points is white, white and black, marking the black pixel points as the starting points of the columns;
when the sequence of the pixel points is black, white and white in the process of going from top to bottom and if no crack area exists, marking the last white pixel point as a column starting point;
when from down up traversing the condition, if there is the rupture zone, be apart from the bubble edge distance according to rupture zone, divide into two kinds of situations:
(b3) if the sequence of the pixel points is white, black and black, marking the white pixel points as the row termination points;
(b4) if the sequence of the pixel points is white, white and black, marking the last white pixel point as a column termination point;
when the condition of traversing from bottom to top is met, if no cracked area exists, if the sequence of the pixel points is black, white and white, marking the black pixel points as row termination points;
finally, the pixel values of the starting points and the ending points of the columns corresponding to all the columns are changed into 1 according to a left-closed and right-opened mode, the pixels represent white according to the pixel values 1, and the pixels represent black and are converted into pictures of the improved filling algorithm;
after the whole image-processing procedure is completed, the geometric data and coordinate data of each bubble are obtained from the pixel points.
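The two-pass filling described above can be sketched in a drastically simplified form: fill every row between its first and last white pixel, then do the same for every column. This ignores the S1–S3/E1–E3 case analysis of the actual FEC and only illustrates the horizontal-then-vertical order; the function name and test mask are invented:

```python
import numpy as np

def fill_rows_then_cols(mask):
    """mask: 2-D uint8 array, 255 = white (bubble), 0 = black.
    Pass 1 fills each row from its first to its last white pixel;
    pass 2 does the same per column. A crude stand-in for FEC."""
    out = mask.copy()
    for axis_view in (out, out.T):          # rows first, then columns
        for line in axis_view:
            white = np.flatnonzero(line == 255)
            if white.size >= 2:
                line[white[0]:white[-1] + 1] = 255
    return out

# A bubble with a horizontal crack through its middle row.
m = np.zeros((5, 5), dtype=np.uint8)
m[1, 1:4] = 255
m[3, 1:4] = 255
filled = fill_rows_then_cols(m)
print(int((filled == 255).sum()))
```

On this toy mask, the column pass closes the crack so the 3 × 3 bubble becomes solid (9 white pixels).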
Further, in the Gaussian blur of step 3, the ksize of the Gaussian kernel is (7, 7).
Further, when the Sobel operator extracts edges in step 3, the ksize of the Sobel operator is (3, 3).
The invention provides a method for measuring leakage quantity of a sealed container based on deep learning and image processing.
In the invention, deep learning covers target detection and target tracking: the improved YOLOv5 is used for target detection and DeepSORT for target tracking. For image processing, Gaussian blur, Sobel edge extraction and hole filling from traditional image processing are combined with the crack-filling algorithm FEC originated by this invention. Finally, the volume of each bubble is calculated by formula and the volumes of all bubbles are accumulated to obtain the leakage amount of the sealed container, realizing indirect measurement of the leakage amount.
The beneficial effects of the invention are: the leakage amount of the sealed container can be obtained by intelligent processing after running the bubble-leakage video of the sealed container only once, with high accuracy and high processing speed.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention.
Fig. 2 is a schematic diagram of the asymmetric convolution block of the present invention.
FIG. 3 is a schematic diagram of an image processing principle of the crack filling algorithm of the present invention, wherein (a) is a schematic diagram of a horizontal crack bubble, (b) is a schematic diagram of a vertical crack bubble, (c) a shaded portion in the diagram is a horizontal mark operator, S1-S3 indicates a start pixel position of a marked row, E1-E3 indicates an end pixel position of a marked row, and (d) a shaded portion in the diagram is a vertical mark operator, S1-S3 indicates a start pixel position of a marked column, and E1-E3 indicates an end pixel position of a marked column.
FIG. 4 is a force analysis diagram of the bubble according to the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in fig. 1, the method for measuring the leakage amount of a sealed container based on deep learning and image processing first places the leaking sealed container under test in a liquid (water in the following example), then collects images with a camera while the leaking container produces bubbles in the water, and finally obtains the leakage amount of the sealed container from the processing of these images and the subsequent calculation.
The processing and calculation comprise deep learning (realizing target detection and tracking), image processing and data processing. Target detection uses the modified YOLOv5 and target tracking uses DeepSORT. Image processing applies Gaussian blur, Sobel edge extraction and hole filling from traditional image processing, combined with the invention's original crack-filling algorithm FEC. For data processing, the bubble-leakage video of the filter needs to be run only once to calculate the leakage amount of the automobile filter accurately; data processing includes extracting, analysing and storing the data. The centroid coordinates of each numbered bubble are processed to obtain its velocity and acceleration, which are substituted together with the bubble area into the subsequent bubble-volume formula, and the volumes of all numbered bubbles are accumulated to give the leakage amount of the filter.
The following is a description of a specific process of the present invention.
Firstly, target detection and tracking based on deep learning:
the deep learning includes target detection and target tracking.
The method adopts YOLOv5 for target detection. Although YOLOv5 has the advantages of fast recognition and adaptive anchor boxes, its ability to extract the features of small targets such as bubbles is insufficient. Therefore, to improve the feature-extraction capability of the YOLOv5 backbone, the invention improves the YOLOv5 algorithm to increase its detection capability for targets that, like bubbles, are small and sit in a complex lighting environment.
Specifically, with reference to ACNet, the invention designs an asymmetric convolution block Ac.B (asymmetric convolution blocks) as the basic convolution of the CSP structure in the YOLOv5 backbone; the structure of Ac.B is shown in fig. 2. Ac.B consists of three convolution kernels, 3 × 3 × c, 3 × 1 × c and 1 × 3 × c, where:
the 3 × 3 × c kernel is a regular convolution and extracts the basic features of the image; the 3 × 1 × c and 1 × 3 × c kernels are vertical and horizontal convolution kernels that extract the vertical and horizontal features of the bubble image, and both can extract the position and rotation features of the target.
Therefore, the improved YOLOv5 backbone network has stronger bubble feature extraction capability. The collected images of the sealed container when bubbles are generated in water are input into a backbone network of YOLOv5, and the bubbles in the images are detected through training of YOLOv5, so that the characteristics of the bubbles can be extracted.
Because convolution satisfies the superposition principle, the asymmetric convolution block Ac.B can directly replace the CSP convolution kernels in the YOLOv5 network; after feature extraction, the features are superposed according to formula (1):

I ∗ K₁ + I ∗ K₂ = I ∗ (K₁ ⊕ K₂)    (1)

where I is the input, K₁ and K₂ are two convolution kernels of compatible size, ∗ denotes convolution, and ⊕ denotes adding the kernels element-wise at corresponding positions.
In the training stage of the improved YOLOv5, the three convolution kernels of the asymmetric convolution block Ac.B are trained separately; in the inference stage their weights are fused into a single regular convolution before inference is performed, so no extra inference time is incurred.
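The kernel additivity that justifies this fusion can be checked numerically. The NumPy sketch below (all names invented; plain loops instead of a deep-learning framework) embeds a 3 × 1 and a 1 × 3 kernel into a 3 × 3 kernel and verifies that one convolution with the fused kernel equals the sum of the three branch convolutions:

```python
import numpy as np

def conv2d_valid(img, k):
    """Plain 'valid' cross-correlation, enough to show kernel additivity."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))

k_sq = rng.standard_normal((3, 3))   # 3x3 "square" kernel
k_v  = rng.standard_normal((3, 1))   # 3x1 vertical kernel
k_h  = rng.standard_normal((1, 3))   # 1x3 horizontal kernel

# Inference-time fusion: embed the asymmetric kernels into the 3x3
# skeleton (centre column / centre row) and add.
fused = k_sq.copy()
fused[:, 1:2] += k_v
fused[1:2, :] += k_h

# Train-time: three separate branches, summed. The slices align the
# small kernels with the centre column/row of each 3x3 window.
branch_sum = (conv2d_valid(img, k_sq)
              + conv2d_valid(img[:, 1:-1], k_v)
              + conv2d_valid(img[1:-1, :], k_h))
assert np.allclose(branch_sum, conv2d_valid(img, fused))
print("fusion verified")
```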
In the invention, target tracking adopts DeepSORT. Used together with the improved YOLOv5, it assigns a tracking number to every bubble, so the data of each bubble in every frame are retained, which is convenient for the later data processing.
The per-frame processing flow of DeepSORT is as follows: the detector produces bounding boxes (bbox) → detections are generated → Kalman-filter prediction → the predicted tracks are matched with the detections of the current frame using the Hungarian algorithm (cascade matching and IOU matching) → Kalman-filter update.
Frame 0: the detector detects 3 detections; since there are currently no tracks, the 3 detections are initialized as tracks.
Frame 1: the detector again detects 3 detections; for the tracks of Frame 0, new track positions are obtained by prediction and then matched with the detections using the Hungarian algorithm to obtain (track, detection) pairs, and finally each track is updated with the detection of its pair.
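The IOU-matching stage can be illustrated with a toy greedy matcher. This is a deliberate simplification (real DeepSORT adds appearance features, Mahalanobis gating and the Hungarian algorithm), and every name and box value below is invented:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def greedy_match(tracks, detections, thresh=0.3):
    """Pair each predicted track box with its best unmatched detection."""
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_j = thresh, None
        for j, d in enumerate(detections):
            if j not in used and iou(t, d) >= best:
                best, best_j = iou(t, d), j
        if best_j is not None:
            pairs.append((ti, best_j))
            used.add(best_j)
    return pairs

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]   # predicted track boxes
dets   = [(21, 19, 31, 29), (1, 1, 11, 11)]   # current-frame detections
print(greedy_match(tracks, dets))  # track 0 -> det 1, track 1 -> det 0
```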
After tracking and feature extraction are carried out on each bubble in the image by the method, a feature image of each bubble in the image can be obtained.
Secondly, image processing:
the image processing of the invention takes the characteristic image of each bubble obtained by the previous deep learning as a processing object, and specifically combines the traditional image processing and the self-created filling algorithm FEC of the invention.
In the method, traditional image processing is applied to the cropped bubble frame obtained by target detection: after Gaussian blur, Sobel edge extraction and hole filling are applied in sequence, the crack-filling algorithm FEC extracts the area and centroid coordinates of the bubble inside the detection frame, and combining each bubble's area and centroid with the detection-frame coordinates from target detection yields the accurate centroid coordinates and area of each bubble.
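Once a clean binary mask is available, the area and centroid follow from simple statistics over the white pixels. A minimal NumPy sketch; the box_x/box_y offsets for mapping crop-local centroids back to full-frame coordinates are an assumption, not the patent's code:

```python
import numpy as np

def area_and_centroid(mask, box_x=0, box_y=0):
    """mask: 2-D boolean array of one cropped bubble (True = bubble pixel).
    box_x, box_y: top-left corner of the detection box in the full frame,
    used to convert crop-local centroids to full-image coordinates."""
    ys, xs = np.nonzero(mask)
    area = ys.size                      # pixel count = bubble area
    cx = box_x + xs.mean()              # centroid x in full-image coords
    cy = box_y + ys.mean()              # centroid y in full-image coords
    return area, (cx, cy)

m = np.zeros((4, 4), dtype=bool)
m[1:3, 1:3] = True                      # a 2x2 toy bubble
print(area_and_centroid(m, box_x=10, box_y=20))
```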
In the present invention the best ksize of the Gaussian kernel is (7, 7), the best ksize of the Sobel operator is (3, 3), and the best cout_dst in hole filling is 11; the crack-filling algorithm FEC is applicable to bubbles of different sizes. A Gaussian kernel of 7 × 7 works well: a larger Gaussian kernel yields more cracking and a less complete bubble shape but eliminates surrounding noise points, whereas a smaller Gaussian kernel fills the bubble shape more completely but leaves surrounding noise points.
During hole filling, the complement of the original image is used as the Mask to limit the dilation result; a black image with a white border is used as the initial Marker, which is dilated repeatedly with the structuring element SE until convergence; finally the Marker is complemented to obtain the final image, and subtracting the original image gives the filled image.
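The marker/mask reconstruction just described can be sketched in plain NumPy (4-connected dilation under the mask; all names are invented and this is not the patent's code):

```python
import numpy as np

def fill_holes(mask):
    """Hole filling by morphological reconstruction, as described above:
    Marker = white border on black, Mask = complement of the image;
    dilate the Marker under the Mask until convergence, then complement."""
    comp = ~mask                        # Mask image (complement)
    marker = np.zeros_like(mask)
    marker[0, :] = marker[-1, :] = True
    marker[:, 0] = marker[:, -1] = True
    marker &= comp                      # seeds: background on the border
    while True:
        grown = marker.copy()
        # 4-connected dilation by one pixel, clipped to the Mask
        grown[1:, :] |= marker[:-1, :]
        grown[:-1, :] |= marker[1:, :]
        grown[:, 1:] |= marker[:, :-1]
        grown[:, :-1] |= marker[:, 1:]
        grown &= comp
        if (grown == marker).all():
            break
        marker = grown
    return ~marker                      # complement gives the filled image

ring = np.zeros((7, 7), dtype=bool)
ring[2:5, 2:5] = True
ring[3, 3] = False                      # a hole inside the bubble
filled = fill_holes(ring)
print(int(filled.sum()))                # hole closed: 9 bubble pixels
```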
After the holes are filled, further processing is carried out with the crack-filling algorithm FEC of the invention, described as follows:
the crack filling algorithm FEC is suitable for bubbles with different sizes and different color backgrounds, greatly improves the effect of the original hole filling algorithm, and can improve the accuracy of the centroid coordinate and the area of the bubble.
Bubbles sometimes remain cracked after hole filling and must be filled further with the crack-filling algorithm FEC, which comprises two steps, initial filling and final filling:
step1, preliminary filling (horizontal filling):
Traverse the pixels row by row from top to bottom, scanning each row first from left to right and then from right to left, and judge during the traversal whether the binary image has crack regions; two situations arise: a crack region exists near the bubble edge, or it does not.
For the left-to-right scan (the left shaded portion in fig. 3 (c)), if a crack region exists, there are two cases according to the distance between the crack region and the bubble edge:
(a1) if the pixel sequence is white, black, black, mark the first black pixel as the line start point, denoted S1.
(a2) If the pixel sequence is white, white, black, mark the black pixel as the line start point, denoted S2.
For the left-to-right scan, if no crack region exists and the pixel sequence is black, white, white, mark the last white pixel as the line start point, denoted S3.
For the right-to-left scan (also shown in fig. 3 (c)), if a crack region exists, there are two cases according to the distance between the crack region and the bubble edge:
(a3) if the pixel sequence is white, black, black, mark the white pixel as the line end point, denoted E1.
(a4) If the pixel sequence is white, white, black, mark the last white pixel as the line end point, denoted E2.
For the right-to-left scan, if no crack region exists and the pixel sequence is black, white, white, mark the last white pixel as the line end point, denoted E3.
After Step1 ends, each line start point is matched with the line end point of the same row, and the pixel values from the line start point to the line end point of every row are changed to 255, which initially fills the bubble after hole filling. In the specific filling, the pixel values are changed to 255 from left to right over the left-closed, right-open interval [line start point, line end point).
Step1 can be expressed as a convolution as follows:
In the hole-filled picture, white pixels (original value 255) are represented by pixel value 1 and black pixels by pixel value 0, so the picture can be represented by a matrix M containing only 0s and 1s.
For the left-to-right and right-to-left traversals, the row kernel [5 4 3 2 1] is convolved with the matrix M to obtain a matrix N. Because convolution first flips the kernel, the flipped vector [1 2 3 4 5] slides row by row over M; at each position the overlapping values are multiplied and all products are summed to generate a new value.
1. For a left-to-right traversal:
The white-black-black pattern [1 0 0 0 0] multiplied element-wise with [1 2 3 4 5] sums to 1; since the first black pixel is marked as the line start point, the position one column to the left of each value 1 in matrix N is the line start position in matrix M.
The white-white-black pattern [1 1 1 1 0] multiplied with [1 2 3 4 5] sums to 10; since the black pixel is marked as the line start point, the position two columns to the right of each value 10 in matrix N is the line start position in matrix M.
The black-white-white pattern [0 1 1 1 1] multiplied with [1 2 3 4 5] sums to 14; since the last white pixel is marked as the line start point, the position two columns to the right of each value 14 in matrix N is the line start position in matrix M.
2. For right-to-left traversal:
The white-black-black pattern [1 0 0 0 0] multiplied with [1 2 3 4 5] sums to 1; since the white pixel is marked as the line termination point, the position two columns to the left of each value 1 in matrix N is the line termination position in matrix M.
The white-white-black pattern [1 1 1 1 0] multiplied with [1 2 3 4 5] sums to 10; since the last white pixel is marked as the line termination point, the position one column to the right of each value 10 in matrix N is the line termination position in matrix M.
The black-white-white pattern [0 1 1 1 1] multiplied with [1 2 3 4 5] sums to 14; since the black pixel is marked as the line termination point, the position two columns to the left of each value 14 in matrix N is the line termination position in matrix M.
3. Primary filling:
For every row of matrix M, set the pixel values over the half-open interval [line start point, line termination point) to 1, yielding matrix M2.
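The three pattern weights can be verified in a few lines of NumPy; this sketch only illustrates the multiply-and-sum rule described above (1 = white, 0 = black):

```python
import numpy as np

kernel = np.array([5, 4, 3, 2, 1])

# The three 5-pixel patterns named in the text (1 = white, 0 = black).
patterns = {
    "white-black-black": np.array([1, 0, 0, 0, 0]),
    "white-white-black": np.array([1, 1, 1, 1, 0]),
    "black-white-white": np.array([0, 1, 1, 1, 1]),
}

for name, p in patterns.items():
    # np.convolve flips the kernel, so [5 4 3 2 1] slides as [1 2 3 4 5],
    # exactly the element-wise multiply-and-sum described above.
    value = int(np.convolve(p, kernel, mode="valid")[0])
    print(name, value)  # 1, 10 and 14 respectively
```

Note that the weights 1, 10, and 14 uniquely identify the patterns only for windows of the shapes listed here; this is a check of the arithmetic, not a full implementation of the marking rules.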
Step 2, final filling (vertical filling):
Traverse the columns from left to right, scanning each column first from top to bottom and then from bottom to top. Again there are two cases near the bubble edge: a cracked region is present or absent.
For top-to-bottom traversal with a cracked region, two sub-cases arise depending on the distance between the cracked region and the bubble edge:
(a5) If the pixel sequence is white, black, black, mark the first black pixel as a column start point.
(a6) If the pixel sequence is white, white, black, mark the black pixel as a column start point.
For top-to-bottom traversal with no cracked region, if the pixel sequence is black, white, white, mark the last white pixel as a column start point.
For bottom-to-top traversal with a cracked region, two sub-cases arise depending on the distance between the cracked region and the bubble edge:
(a7) If the pixel sequence is white, black, black, mark the white pixel as a column termination point.
(a8) If the pixel sequence is white, white, black, mark the last white pixel as a column termination point.
For bottom-to-top traversal with no cracked region, when the sequence black, white, white appears, mark the black pixel as a column termination point.
Finally, set the pixel values over the half-open interval [column start point, column termination point) of every column to 1. With pixel value 1 representing white and 0 representing black, the result converts back into the picture produced by the improved filling algorithm.
Step 2 can be expressed as a convolution as follows:
For both the top-to-bottom and bottom-to-top traversals, convolve M2 with the column vector [5 4 3 2 1]ᵀ to obtain a matrix H. Because convolution flips the kernel first, the flipped vector [1 2 3 4 5]ᵀ slides down each column of M2; at every position the overlapping values are multiplied element-wise and summed to produce one output value.
1. For a top-down traversal:
The white-black-black pattern [1 0 0 0 0] multiplied with [1 2 3 4 5] sums to 1; since the first black pixel is marked as the column start point, the position one row above each value 1 in matrix H is the column start position in matrix M2.
The white-white-black pattern [1 1 1 1 0] multiplied with [1 2 3 4 5] sums to 10; since the black pixel is marked as the column start point, the position two rows below each value 10 in matrix H is the column start position in matrix M2.
The black-white-white pattern [0 1 1 1 1] multiplied with [1 2 3 4 5] sums to 14; since the last white pixel is marked as the column start point, the position two rows below each value 14 in matrix H is the column start position in matrix M2.
2. For a bottom-up traversal:
The white-black-black pattern [1 0 0 0 0] multiplied with [1 2 3 4 5] sums to 1; since the white pixel is marked as the column termination point, the position two rows above each value 1 in matrix H is the column termination position in matrix M2.
The white-white-black pattern [1 1 1 1 0] multiplied with [1 2 3 4 5] sums to 10; since the last white pixel is marked as the column termination point, the position one row below each value 10 in matrix H is the column termination position in matrix M2.
The black-white-white pattern [0 1 1 1 1] multiplied with [1 2 3 4 5] sums to 14; since the black pixel is marked as the column termination point, the position two rows above each value 14 in matrix H is the column termination position in matrix M2.
After traversals 1 and 2 are complete, match each column start point and column termination point to its column and set the pixel values from the start point to the termination point of each column to 255 (as shown in (d) of Fig. 3); this completes the final filling of the hole-filled bubble. The filling again follows the half-open interval [column start point, column termination point).
3. Final filling:
For every column of matrix M2, set the pixel values over the half-open interval [column start point, column termination point) to 1, yielding matrix M3. With pixel value 1 representing white and 0 representing black, M3 converts back into the picture produced by the improved filling algorithm.
Summary (convolution representation):
Represent the white pixels of the hole-filled picture by 1 and the black pixels by 0, so the picture becomes a matrix M. The convolution operations of Step 1 and Step 2 summarize to the following overall procedure:
1. Convolve M with the row vector [5 4 3 2 1] to obtain matrix N. Traverse each row of N from left to right: when a value of 1 is met, record (row, column − 1) as the line start point and stop traversing that row; when a value of 10 or 14 is met, record (row, column + 2) as the line start point and stop traversing that row. Then traverse from right to left: when a value of 1 or 14 is met, record (row, column − 2) as the line termination point and stop; when a value of 10 is met, record (row, column + 1) as the line termination point and stop.
2. For every row of M, set the pixel values over the half-open interval [line start point, line termination point) to 1, yielding matrix M2.
3. Convolve M2 with the column vector [5 4 3 2 1]ᵀ to obtain matrix H. Traverse each column of H from top to bottom: when a value of 1 is met, record (column, row − 1) as the column start point and stop traversing that column; when a value of 10 or 14 is met, record (column, row + 2) as the column start point and stop. Then traverse from bottom to top: when a value of 1 or 14 is met, record (column, row − 2) as the column termination point and stop; when a value of 10 is met, record (column, row + 1) as the column termination point and stop.
4. For every column of M2, set the pixel values over the half-open interval [column start point, column termination point) to 1, yielding matrix M3. With pixel value 1 representing white and 0 representing black, M3 converts back into the picture produced by the improved filling algorithm.
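A much-simplified sketch of the two-pass fill (rows, then columns, each over a half-open [start, end) span) is shown below. It fills between the outermost white pixels of every row and column instead of running the full five-pixel pattern logic, so it illustrates only the fill step, not the patent's complete algorithm:

```python
import numpy as np

def fill_pass(M, axis):
    """Fill each row (axis=1) or column (axis=0) between its outermost white
    pixels using a half-open [start, end) span, as in steps 2 and 4 above."""
    out = M.copy()
    lines = out if axis == 1 else out.T   # iterating out.T yields column views
    for line in lines:
        idx = np.flatnonzero(line)
        if idx.size >= 2:
            line[idx[0]:idx[-1]] = 1      # [start, end): the end pixel is already 1
    return out

# A bubble edge with a crack at the bottom (0 = black, 1 = white).
M = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],   # cracked edge
])

M3 = fill_pass(fill_pass(M, axis=1), axis=0)   # row pass (M2), then column pass (M3)
```

After both passes the interior and the crack pixel at the bottom edge are white, matching the intent of the final fill.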
Thirdly, data processing:
The data processing part covers extraction, analysis, and storage of the data. While target detection and tracking run, the coordinate information of each detection box is extracted; the detection box region of every frame is cropped, named, and saved to a folder; traditional image processing is then applied to the detection box region to obtain the fraction of the box area occupied by the bubble and the offset of the bubble centroid relative to the lower-left corner of the box. Knowing the actual length of a calibration object and its ratio to the picture length, the real-world lengths represented by the picture width and height can be converted, and finally the actual coordinate information and area of the bubble are obtained.
For each frame, the centroid abscissa and ordinate, length, width, aspect ratio, and area of every bubble are stored in an Excel file for later processing, and the centroid coordinates and areas are also saved as scatter data to visualize the behavior of the bubbles. The pipeline runs with one command: it can batch-process the videos in several folders or only the videos in a single folder, displaying the folder being processed. When a run finishes, the required data are generated and can be visualized to inspect the effect of each traditional image processing stage: the original picture, the Gaussian-blurred picture, the Sobel edge map, the hole-filled picture, and the crack-filling result. Comparing these shows how the improved algorithm raises the accuracy of the bubble area and centroid coordinates, and whether the resulting data are reliable.
Target tracking follows multiple bubbles at once, so each bubble can be assigned a unique ID; all data belonging to a bubble are retrieved through its ID and written under the corresponding header of the Excel table, preserving the experimental results.
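The per-frame, per-ID record keeping described above can be sketched as follows. The field names and the use of the csv module (as a dependency-free stand-in for the Excel table) are assumptions for illustration, not the patent's actual storage code:

```python
import csv
import io

# Hypothetical field layout: one record per (frame, bubble ID), holding the
# quantities listed in the text (centroid, width/height, aspect ratio, area).
fields = ["frame", "bubble_id", "centroid_x", "centroid_y",
          "width", "height", "aspect_ratio", "area"]

records = [
    {"frame": 1, "bubble_id": 3, "centroid_x": 52.4, "centroid_y": 88.1,
     "width": 14, "height": 16, "aspect_ratio": round(14 / 16, 3), "area": 178},
]

# Write the table to an in-memory buffer; an Excel writer would be used the
# same way, keyed by the bubble's unique tracking ID.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerows(records)
table = buf.getvalue()
```

Keying every record by the tracker-assigned ID is what lets all data for one bubble be collected across frames before the volume calculation.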
How the bubble information is obtained is described below:
x0 = max(bboxes[0] - 10, 0)
y0 = max(bboxes[1] - 10, 0)
x1 = min(bboxes[2] + 10, im0.shape[1])
y1 = min(bboxes[3] + 10, im0.shape[0])
A pixel coordinate system u-v is established with the upper-left corner of the image as the origin; the abscissa u and ordinate v of a pixel are its column and row indices in the image array (u corresponds to x and v to y in OpenCV). Here (x0, y0) is the upper-left corner of the detection box and (x1, y1) its lower-right corner; the upper-left coordinates must be no less than 0 and the lower-right coordinates no greater than the (width, height) of the picture. The margin 10 can be replaced by any non-negative number. The expanded detection box region is cropped via crop = im0[y0:y1, x0:x1], traditional image processing is applied to it, the bubble centroid coordinates and bubble area within the crop are computed, and the crop is named and saved.
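Tidied into a function, the cropping step might look like the sketch below; the function name and the returned top-left offset are conveniences introduced here, not taken from the source:

```python
import numpy as np

def crop_detection(im0, bbox, margin=10):
    """Crop the detection box (x0, y0, x1, y1) from frame im0, expanded by
    `margin` pixels on every side and clamped to the image borders."""
    h, w = im0.shape[:2]
    x0 = max(int(bbox[0]) - margin, 0)
    y0 = max(int(bbox[1]) - margin, 0)
    x1 = min(int(bbox[2]) + margin, w)
    y1 = min(int(bbox[3]) + margin, h)
    crop = im0[y0:y1, x0:x1]
    return crop, (x0, y0)   # the offset is needed to map results back later

frame = np.zeros((100, 200), dtype=np.uint8)   # dummy H=100, W=200 frame
crop, (ox, oy) = crop_detection(frame, (5, 5, 50, 60))
```

Returning (x0, y0) alongside the crop makes it straightforward to convert centroid coordinates measured inside the crop back to whole-image coordinates.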
The improved hole-filling algorithm yields a filled bubble picture in which the bubble region is white (pixel value 255) and all other regions are black (pixel value 0). First obtain the height and width of the picture. Traverse from top to bottom and, within each row, from left to right: whenever a pixel value is 255, increment the running total of bubble pixels sum_white (initially 0) by 1, increment the white-pixel count of the current row temp_width (initially 0) by 1, and add the pixel's column index to sum_centroid_x_coordinate. The maximum temp_width over all rows is taken as the bubble width. After the top-to-bottom traversal, sum_white is the total number of bubble pixels. Dividing sum_centroid_x_coordinate by sum_white (rounded to a specified number of decimal places) gives centroid_x_coordinate, the centroid abscissa of the bubble on the cropped picture.
Then traverse from left to right and, within each column, from top to bottom: whenever a pixel value is 255, increment the white-pixel count of the current column temp_height (initially 0) by 1 and add the pixel's row index to sum_centroid_y_coordinate. The maximum temp_height over all columns is taken as the bubble height. After this traversal, dividing sum_centroid_y_coordinate by the sum_white obtained above (rounded to a specified number of decimal places) gives centroid_y_coordinate, the centroid ordinate of the bubble on the cropped picture.
Finally, return the total number of bubble pixels (the bubble area), the bubble width and height, and the centroid abscissa and ordinate.
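A vectorized NumPy equivalent of the traversal just described can be sketched as follows (the function and variable names mirror the text but are otherwise assumptions):

```python
import numpy as np

def bubble_stats(mask):
    """Area, width, height and centroid of the white (255) region of a filled
    binary mask, matching the row/column counting described above."""
    white = (mask == 255)
    sum_white = int(white.sum())             # total bubble pixels = area
    width = int(white.sum(axis=1).max())     # widest row of white pixels
    height = int(white.sum(axis=0).max())    # tallest column of white pixels
    ys, xs = np.nonzero(white)
    centroid_x = round(float(xs.mean()), 2)  # mean column index
    centroid_y = round(float(ys.mean()), 2)  # mean row index
    return sum_white, width, height, centroid_x, centroid_y

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 3:7] = 255                         # a 3-row by 4-column toy bubble
```

For the toy mask the call returns area 12, width 4, height 3, and centroid (4.5, 3.0) in crop coordinates. An empty mask would need a guard before the mean, omitted here for brevity.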
From the upper-left corner of the detection box, the number of pixels by which the crop was expanded, and the centroid coordinates within the crop, the centroid coordinates of the bubble on the whole picture are obtained; the bubble area is sum_white, its width is width, and its height is height.
How to calculate the leak amount of the sealed container from the acquired bubble information is described below:
In a static liquid, neglecting the gas gravity G, a bubble is subject to three main forces in the vertical direction: the buoyancy Fb exerted on it by the liquid, the drag force Fd exerted on it as it moves through the flow field, and, when the bubble accelerates upward, the virtual mass force Fvm generated as the bubble drags the surrounding fluid into accelerated motion. The force analysis is shown in Fig. 4, and the three forces are expressed as:
Fb = ρ·g·V,  Fd = (1/2)·Cd·ρ·S·(vl − vg)²,  Fvm = (1/2)·ρ·V·a
(2),
In formula (2), ρ is the liquid density; g is the gravitational acceleration; V is the bubble volume; Cd is the fluid resistance coefficient; vl is the flow velocity of the liquid; vg is the flow velocity of the gas, i.e., the movement velocity of the bubble; S is the windward area of the bubble; and a is the acceleration of the bubble in the vertical direction. A force balance equation for the bubble is established from Newton's second law, as in formula (3):
m·a = Fb − Fd − Fvm (3),
Rearranging, and noting that the gas mass m is negligible, gives the bubble volume as formula (4):
V = Cd·S·(vl − vg)² / (2g − a)
(4),
where a is the acceleration of the bubble in the vertical direction.
To compute the bubble volume in the gas-liquid two-phase flow field, the centroid coordinate data of each numbered bubble are processed to obtain the corresponding velocity and acceleration, which are substituted together with the bubble area data into formula (4) to compute the volume of each bubble; accumulating the volume of every numbered bubble gives the leakage amount of the sealed container.
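The last step can be sketched end to end: estimate the bubble's vertical speed and acceleration from its centroid track by finite differences, then evaluate the volume formula. The closed form V = Cd·S·v²/(2g − a) used below is a standard force-balance result assumed here to match formula (4), and all numbers are illustrative only:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def track_kinematics(y, dt):
    """Vertical speed and acceleration of a centroid track y(t) via
    np.gradient (central differences, one-sided at the ends)."""
    v = np.gradient(y, dt)
    a = np.gradient(v, dt)
    return v, a

def bubble_volume(v, a, S, Cd):
    """Bubble volume assuming the force-balance form V = Cd*S*v^2/(2g - a);
    Cd, S, v, a as defined in the text around formula (4)."""
    return Cd * S * v**2 / (2 * G - a)

# Illustrative numbers only: a bubble rising at a constant 0.2 m/s.
y = np.array([0.00, 0.02, 0.04, 0.06])   # centroid height per frame, m
v, a = track_kinematics(y, dt=0.1)       # dt = frame interval, s
V = bubble_volume(v[-1], a[-1], S=3e-5, Cd=0.5)
# Summing V over every tracked bubble ID gives the total leakage amount.
```

In practice the centroid track comes from the per-ID records described earlier, and the accumulation over all IDs yields the leakage of the sealed container.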
The method integrates deep learning, traditional image processing, and data processing: running a bubble leakage video of the sealed container once suffices to compute the leakage amount accurately. The data are stored in an Excel table, and the detection box region pictures are cropped, named, and saved to a folder, making it convenient to re-run the traditional image processing later, check the improvement brought by the improved hole-filling algorithm, and verify whether the obtained data are accurate.
The embodiments of the present invention are described only for the preferred embodiments of the present invention, and not for the limitation of the concept and scope of the present invention, and various modifications and improvements made to the technical solution of the present invention by those skilled in the art without departing from the design concept of the present invention shall fall into the protection scope of the present invention, and the technical content of the present invention which is claimed is fully set forth in the claims.

Claims (6)

1. The method for measuring the leakage quantity of the sealed container based on deep learning and image processing is characterized by comprising the following steps of:
step1, acquiring an image when bubbles are generated in liquid when a sealed container leaks;
step 2, taking the image collected in step 1 as input, adopting YOLOv5 for feature extraction and a target tracking algorithm to track each bubble in the image as a target, thereby obtaining a cropped detection-box picture of each bubble in every frame of the image;
step 3, performing image processing on the cropped detection-box pictures of each bubble obtained in step 2 to obtain geometric data and coordinate data of each bubble;
step 4, based on the geometric data and the coordinate data of each bubble obtained in step 3, calculating the volume V of each bubble by the following formula:
V = Cd·S·v² / (2g − a)
wherein v represents the flow velocity of the gas, i.e., the movement velocity of the bubble, determined from the coordinate data of the bubble;
Cd is the fluid resistance coefficient, determined by the liquid medium in which the sealed container is placed;
S is the windward area of the bubble, determined from the geometric data of the bubble obtained in step 3;
g is the gravitational acceleration;
a is the acceleration of the bubble in the vertical direction;
step 5, adding the volumes V of all the bubbles obtained in step 4 to obtain the leakage amount of the sealed container.
2. The method for measuring the leakage of the sealed container based on deep learning and image processing as claimed in claim 1, wherein step 2 adopts YOLOv5 with the original CSP convolution kernel in YOLOv5 replaced by an asymmetric convolution block;
the asymmetric convolution block comprises three convolution kernels, namely a 3 × 3 × c convolution kernel, a 3 × 1 × c convolution kernel, and a 1 × 3 × c convolution kernel, wherein:
the 3 × 3 × c convolution kernel is a regular convolution used to extract the basic features of the bubbles in the image;
the 3 × 1 × c and 1 × 3 × c convolution kernels are vertical and horizontal convolution kernels, extracting the longitudinal and transverse features of the bubbles respectively; both can extract the position feature and the rotation feature of the target.
3. The method for measuring the leakage quantity of the sealed container based on deep learning and image processing as claimed in claim 1, wherein in step 2, the DeepSORT algorithm is adopted for target tracking.
4. The method for measuring the leakage amount of the sealed container based on deep learning and image processing as claimed in claim 1, wherein in step 3, when image processing is performed on the cropped detection-box picture of each bubble obtained in step 2, a binary image is obtained after Gaussian blur processing, Sobel operator edge extraction processing, and hole filling processing are applied in sequence; the bubble part of the binary image consists of white pixel points and the remainder of black pixel points; crack filling processing is then performed, which comprises the following steps:
step1, initial filling:
traversing the pixel points in the binary image from top to bottom and, within each row, in order from left to right and then from right to left, judging during the traversal whether a cracked region exists in the binary image, wherein:
during the left-to-right traversal, if a cracked region exists, two cases are distinguished according to the distance between the cracked region and the bubble edge:
(a1) if the sequence of the pixel points is white, black, black, marking the first black pixel point as a line start point, denoted S1;
(a2) if the sequence of the pixel points is white, white, black, marking the black pixel point as a line start point, denoted S2;
during the left-to-right traversal, if no cracked region exists and the sequence of the pixel points is black, white, white, marking the last white pixel point as a line start point, denoted S3;
during the right-to-left traversal, if a cracked region exists, two cases are distinguished according to the distance between the cracked region and the bubble edge:
(a3) if the sequence of the pixel points is white, black, black, marking the white pixel point as a line termination point, denoted E1;
(a4) if the sequence of the pixel points is white, white, black, marking the last white pixel point as a line termination point, denoted E2;
during the right-to-left traversal, if no cracked region exists and the sequence of the pixel points is black, white, white, marking the last white pixel point as a line termination point, denoted E3;
after all traversals and markings, matching each line start point and line termination point to its line and changing the pixel values from the line start point to the line termination point of each line to 255, completing the initial filling of the hole-filled bubble;
step2, final filling:
after the filling of Step 1, traversing the pixel points from left to right and, within each column, in order from top to bottom and then from bottom to top, wherein:
during the top-to-bottom traversal, if a cracked region exists, two cases are distinguished according to the distance between the cracked region and the bubble edge:
(b1) if the sequence of the pixel points is white, black, black, marking the first black pixel point as a column start point;
(b2) if the sequence of the pixel points is white, white, black, marking the black pixel point as a column start point;
during the top-to-bottom traversal, if no cracked region exists and the sequence of the pixel points is black, white, white, marking the last white pixel point as a column start point;
during the bottom-to-top traversal, if a cracked region exists, two cases are distinguished according to the distance between the cracked region and the bubble edge:
(b3) if the sequence of the pixel points is white, black, black, marking the white pixel point as a column termination point;
(b4) if the sequence of the pixel points is white, white, black, marking the last white pixel point as a column termination point;
during the bottom-to-top traversal, if no cracked region exists and the sequence of the pixel points is black, white, white, marking the black pixel point as a column termination point;
finally, changing the pixel values over the half-open interval [column start point, column termination point) of every column to 1, where pixel value 1 represents white and pixel value 0 represents black, and converting the result into the picture of the improved filling algorithm;
after the whole image processing process is completed, geometric data and coordinate data of each bubble are obtained based on the pixel points.
5. The method for measuring the leakage amount of the sealed container based on deep learning and image processing as claimed in claim 4, wherein the ksize of the Gaussian kernel in the Gaussian blur processing in step 3 is (7, 7).
6. The method for measuring the leakage of the sealed container based on deep learning and image processing as claimed in claim 4, wherein the ksize of the Sobel operator in the Sobel edge extraction processing in step 3 is (3, 3).
CN202210809336.7A 2022-07-11 2022-07-11 Sealed container leakage amount measuring method based on deep learning and image processing Pending CN114882101A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210809336.7A CN114882101A (en) 2022-07-11 2022-07-11 Sealed container leakage amount measuring method based on deep learning and image processing


Publications (1)

Publication Number Publication Date
CN114882101A true CN114882101A (en) 2022-08-09

Family

ID=82683009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210809336.7A Pending CN114882101A (en) 2022-07-11 2022-07-11 Sealed container leakage amount measuring method based on deep learning and image processing

Country Status (1)

Country Link
CN (1) CN114882101A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003130753A (en) * 2001-10-23 2003-05-08 Kyosan Denki Co Ltd Work airtightness inspection device and method therefor
CN104535275A (en) * 2014-12-11 2015-04-22 天津大学 Underwater gas leakage amount detection method and device based on bubble acoustics
CN110415257A (en) * 2019-07-23 2019-11-05 东南大学 A kind of biphase gas and liquid flow overlapping bubble image partition method
CN112288770A (en) * 2020-09-25 2021-01-29 航天科工深圳(集团)有限公司 Video real-time multi-target detection and tracking method and device based on deep learning
CN112686923A (en) * 2020-12-31 2021-04-20 浙江航天恒嘉数据科技有限公司 Target tracking method and system based on double-stage convolutional neural network
CN113139442A (en) * 2021-04-07 2021-07-20 青岛以萨数据技术有限公司 Image tracking method and device, storage medium and electronic equipment
CN113833583A (en) * 2021-06-28 2021-12-24 北京航天动力研究所 Device and method for detecting leakage amount of gas tightness
CN113838089A (en) * 2021-09-20 2021-12-24 哈尔滨工程大学 Bubble trajectory tracking method based on feature matching algorithm
CN114677554A (en) * 2022-02-25 2022-06-28 华东理工大学 Statistical filtering infrared small target detection tracking method based on YOLOv5 and Deepsort


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHENGLONG DING ET.AL: "A Measurement System for the Tightness of Sealed Vessels Based on Machine Vision Using Deep Learning Algorithm", 《IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT》 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220809