CN116844142B - Bridge foundation scouring identification and assessment method - Google Patents
Bridge foundation scouring identification and assessment method
- Publication number: CN116844142B
- Application number: CN202311087803.0A
- Authority: CN (China)
- Legal status: Active
Classifications
- G06V20/60 — Scenes; scene-specific elements; type of objects
- G06T3/4007 — Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
- G06T5/10 — Image enhancement or restoration using non-spatial domain filtering
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/337 — Image registration using feature-based methods involving reference images or patches
- G06T7/90 — Determination of colour characteristics
- G06V10/443 — Local feature extraction by matching or filtering
- G06V10/806 — Fusion of extracted features
- G06T2207/20024, G06T2207/20032 — Filtering details; median filtering
Abstract
The invention discloses a method for identifying and evaluating bridge foundation scouring, comprising the following steps: collecting a multi-beam three-dimensional image and a geological radar two-dimensional image and preprocessing them to obtain a preprocessed multi-beam three-dimensional image and a preprocessed geological radar two-dimensional image; extracting local features from the preprocessed multi-beam three-dimensional image and geological radar two-dimensional image to obtain the local features in each image; matching the local features, removing poorly matched features and outliers, and completing geometric verification to obtain the matched local features; fusing the matched local features to obtain a fused new image; and analyzing and evaluating the fused new image to determine whether scouring of the bridge foundation exists, and if so, evaluating the scouring degree of the bridge foundation to complete the identification and evaluation of bridge foundation scouring. The method solves the problem that existing bridge foundation scour detection methods have low precision and accuracy.
Description
Technical Field
The invention relates to the technical field of bridge safety monitoring, in particular to a method for identifying and evaluating bridge foundation scouring.
Background
Scouring of the bridge foundation is one of the common problems during the service life of a bridge; if it is not found and treated in time, it can pose extreme risks to the bridge foundation and make the bridge dangerous to use.
The common bridge foundation scouring detection methods are as follows:
1. Probe rod detection: fast and low-cost, but cannot accurately give the riverbed section or the characteristics of the rock strata below the riverbed.
2. Diving detection: fast, but requires diving experience and has high safety requirements.
3. Sonar detection: allows continuous measurement, but requires the antenna to be immersed in water; the detection equipment is relatively expensive, the detection data can be affected by environmental noise, and the formation properties below the riverbed cannot be tested.
4. Seismic wave reflection detection: can continuously and accurately record the riverbed section and give the characteristics of the rock strata below it, but requires the antenna to be immersed in water, the detection equipment is relatively expensive, and the detection data can be influenced by environmental noise.
5. Geological radar detection: has the advantages of high precision, intuitive images, high speed and flexible field work, and can continuously and accurately record the riverbed section while giving the characteristics of the strata under the riverbed; however, the detection equipment is relatively expensive, the detection data can be affected by noise, and the method cannot be used in saline water.
Therefore, further research and improvement of bridge foundation scour detection methods are needed so that bridge foundation scouring can be evaluated more accurately, reliably, and efficiently.
Disclosure of Invention
Aiming at the above defects in the prior art, the method for identifying and evaluating bridge foundation scouring provided by the invention solves the problem that existing bridge foundation scour detection methods have low precision and accuracy.
In order to achieve the aim of the invention, the invention adopts the following technical scheme: the method for identifying and evaluating the bridge foundation scouring comprises the following steps:
s1, acquiring a multi-beam three-dimensional image and a geological radar two-dimensional image, and respectively preprocessing the multi-beam three-dimensional image and the geological radar two-dimensional image to obtain a preprocessed three-dimensional image and a preprocessed two-dimensional image;
s2, respectively extracting local features of the preprocessed multi-beam three-dimensional image and the preprocessed geological radar two-dimensional image to obtain local features in the three-dimensional image and the geological radar two-dimensional image;
s3, matching local features in the three-dimensional image and the two-dimensional image, removing features with poor matching and abnormal values, and completing geometric verification to obtain matched local features;
s4, fusing the matched local features to obtain a fused new image;
and S5, analyzing and evaluating the new fused image, and evaluating the scouring degree of the bridge foundation when the scouring phenomenon of the bridge foundation in the new fused image is recognized, so as to finish the bridge foundation scouring recognition and evaluation.
Further: the step S1 comprises the following sub-steps:
s11, denoising the multi-beam three-dimensional image and the geological radar two-dimensional image to obtain a denoised three-dimensional image and a denoised two-dimensional image;
s12, removing salt and pepper noise in the three-dimensional image and the two-dimensional image after noise removal by adopting median filtering, and obtaining the three-dimensional image and the two-dimensional image after median filtering;
s13, correcting the three-dimensional image and the two-dimensional image after the median filtering to obtain a corrected three-dimensional image and a corrected two-dimensional image;
s14, registering the corrected three-dimensional image and the corrected two-dimensional image to enable the corrected three-dimensional image and the corrected two-dimensional image to be aligned in space, and obtaining a registered three-dimensional image and a registered two-dimensional image;
and S15, detecting the quality of the three-dimensional image and the two-dimensional image after registration, and obtaining the preprocessed three-dimensional image and the preprocessed two-dimensional image when the quality standard is reached.
Further: the quality detection in step S15 includes image outlier detection and image missing data detection.
Further: the step S2 comprises the following sub-steps:
s21, detecting key points in the preprocessed three-dimensional image and the preprocessed two-dimensional image by using a Gaussian difference operator and a Gaussian blur function, and calculating a local feature descriptor of each key point;
s22, calculating the matching degree of the local feature descriptors in the preprocessed three-dimensional image and the preprocessed two-dimensional image by adopting the distance measure, and obtaining the local features in the three-dimensional image and the preprocessed two-dimensional image.
Further: the step S3 comprises the following sub-steps:
s31, matching the local features in the images by adopting the Euclidean distance as the matching criterion to obtain the preliminarily matched local features, where the formula is as follows:

$$dist(D_i, E_j) = \sqrt{\sum_{k=1}^{m} \left(D_{i,k} - E_{j,k}\right)^{2}}$$

wherein $D_i$ is the local feature descriptor of the $i$-th keypoint in the preprocessed three-dimensional image, $E_j$ is the local feature descriptor of the $j$-th keypoint in the preprocessed two-dimensional image, $m$ is the dimension of the feature descriptors, $dist(\cdot)$ is the Euclidean distance function, $k$ is the element index, and $D_{i,k}$ and $E_{j,k}$ are respectively the $k$-th elements of the two descriptors;
s32, performing outlier rejection on the preliminarily matched local features by adopting a RANSAC algorithm to obtain local features with outlier rejection;
s33, performing geometric verification on the local features with outlier removed by using R-Hough transformation, and completing matching of the local features to obtain matched local features.
Further: the step S4 adopts a fusion method based on a weighted average method, different weights are given to different features according to the importance or credibility of the different features in the matched local features, a new fused image is obtained, and the mathematical expression of the fusion method based on the weighted average method is as follows:
wherein,for the fused image pixel value, +.>Is the firstbThe picture is +.>The pixel value at which it is located,is the firstbThe picture is +.>The weight value of the position is calculated,bin order to count the number of the marks,nis the number of images to be fused.
Further: step S5, determining whether a scouring phenomenon exists or not by comparing pixel values of different areas in the fused new image;
if the difference value between the pixel value of a certain area and the pixel value of the surrounding area is higher than a preset value, the area has a scouring phenomenon, and the scouring degree is evaluated;
if the difference between the pixel value of a certain area and the pixel value of the surrounding area is not higher than the preset value, the area is not flushed.
Further: the method for evaluating the flushing degree of the bridge comprises the following steps:
s51, calculating the average value of pixels in a flushing area in the new fused image, comparing the average value with the average value of pixels in a normal area, and evaluating the severity of flushing;
s52, performing binarization processing on the new fused image by adopting a threshold method, dividing a flushing area in the new fused image, and determining a flushing range;
and S53, taking the severity degree of the flushing and the flushing range as flushing degrees, and finishing the evaluation of the flushing degrees.
The beneficial effects of the invention are as follows:
drawings
FIG. 1 is a flow chart of a method for identifying and evaluating a bridge foundation scour according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the invention by those skilled in the art; however, it should be understood that the invention is not limited to the scope of these embodiments, and for those of ordinary skill in the art, as long as various changes fall within the spirit and scope of the invention as defined and determined by the appended claims, all inventions and creations making use of the inventive concept are under protection.
As shown in fig. 1, in one embodiment of the present invention, there is provided a method for identifying and evaluating a bridge foundation scour, comprising the steps of:
s1, acquiring a multi-beam three-dimensional image and a geological radar two-dimensional image, and respectively preprocessing the multi-beam three-dimensional image and the geological radar two-dimensional image to obtain a preprocessed three-dimensional image and a preprocessed two-dimensional image;
s2, respectively extracting local features of the preprocessed multi-beam three-dimensional image and the preprocessed geological radar two-dimensional image to obtain local features in the three-dimensional image and the geological radar two-dimensional image;
s3, matching local features in the three-dimensional image and the two-dimensional image, removing features with poor matching and abnormal values, and completing geometric verification to obtain matched local features;
s4, fusing the matched local features to obtain a fused new image;
and S5, analyzing and evaluating the new fused image, and evaluating the scouring degree of the bridge foundation when the scouring phenomenon of the bridge foundation in the new fused image is recognized, so as to finish the bridge foundation scouring recognition and evaluation.
In this embodiment, the step S1 includes the following sub-steps:
s11, denoising the multi-beam three-dimensional image and the geological radar two-dimensional image to obtain a denoised three-dimensional image and a denoised two-dimensional image;
denoising formula:

$$\hat{I}(x,y) = I(x,y) - n(x,y)$$

wherein $\hat{I}(x,y)$ represents the denoised image pixel value, $I(x,y)$ represents the original image pixel value, and $n(x,y)$ represents the noise;
s12, removing salt and pepper noise in the three-dimensional image and the two-dimensional image after noise removal by adopting median filtering, and obtaining the three-dimensional image and the two-dimensional image after median filtering;
median filter formula:

$$g(x,y) = median\big(f(x-k{:}x+k,\; y-k{:}y+k)\big)$$

wherein $g(x,y)$ represents the median-filtered image pixel value, $f(x-k{:}x+k, y-k{:}y+k)$ denotes the $k\times k$ neighborhood of pixel values centered at $(x,y)$, and $median(\cdot)$ sorts the neighborhood pixel values and takes the middle value;
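As a minimal sketch, the median filtering of step S12 could look like the following NumPy version (the function name, window parameter `k`, and the toy image are illustrative assumptions):

```python
import numpy as np

def median_filter(img, k=1):
    """Apply a (2k+1)x(2k+1) median filter; borders use edge padding."""
    padded = np.pad(img, k, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            # Median of the neighborhood centered at (y, x)
            out[y, x] = np.median(padded[y:y + 2 * k + 1, x:x + 2 * k + 1])
    return out

# A flat image corrupted by a single salt pixel: the filter restores it.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0  # salt noise
clean = median_filter(img, k=1)
```

Because the median ignores extreme values, a single salt (or pepper) pixel inside the window is replaced by the surrounding level, which is exactly why step S12 uses it.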
s13, correcting the three-dimensional image and the two-dimensional image after the median filtering to obtain a corrected three-dimensional image and a corrected two-dimensional image;
correction formula:

$$I_{corrected} = \frac{I - dark}{white - dark}\,(\max - \min) + \min$$

wherein $I_{corrected}$ represents the corrected image pixel value, $dark$ and $white$ respectively represent the black and white reference standards, and $\max$ and $\min$ respectively represent the maximum and minimum of the corrected pixel value range;
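The correction of step S13 can be sketched as a linear rescaling against the black/white reference standards; the function name and sample values below are illustrative assumptions:

```python
import numpy as np

def radiometric_correct(img, dark, white, lo=0.0, hi=255.0):
    """Rescale pixels so `dark` maps to `lo` and `white` maps to `hi`."""
    return (img - dark) / (white - dark) * (hi - lo) + lo

img = np.array([[20.0, 110.0],
                [200.0, 65.0]])
# Reference standards measured as 20 (black) and 200 (white).
corrected = radiometric_correct(img, dark=20.0, white=200.0)
```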
s14, registering the corrected three-dimensional image and the corrected two-dimensional image to enable the corrected three-dimensional image and the corrected two-dimensional image to be aligned in space, and obtaining a registered three-dimensional image and a registered two-dimensional image;
registration formula:

$$T(x,y) = M\begin{bmatrix} x \\ y \end{bmatrix} + T_{0}$$

wherein $T(x,y)$ represents the registered pixel position, $M$ represents the rotation and scaling matrix, and $T_{0}$ represents the translation matrix;
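Applying the registration transform of step S14 to point coordinates can be sketched as follows; the rotation angle, translation vector, and function name are illustrative assumptions:

```python
import numpy as np

def register_points(points, M, t0):
    """Apply the transform p' = M @ p + t0 to an Nx2 array of points."""
    return points @ M.T + t0

theta = np.pi / 2  # illustrative 90-degree rotation
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t0 = np.array([1.0, 2.0])  # illustrative translation
pts = np.array([[1.0, 0.0]])
moved = register_points(pts, M, t0)
```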
s15, detecting the quality of the three-dimensional image and the two-dimensional image after registration, and obtaining a preprocessed three-dimensional image and a preprocessed two-dimensional image when the quality standard is reached;
the quality detection in step S15 includes image outlier detection and image missing data detection,
a Z-score outlier detection algorithm is adopted for detecting outliers in the image, with the formula:
Z-score=(g-μ)/σ
wherein g is the gray value of the pixel, and μ and σ are the mean and standard deviation of all pixels in the image; if the Z-score of a pixel is above a certain threshold, it is considered an outlier;
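The Z-score test above can be sketched as follows (the threshold of 3 and the function name are illustrative assumptions):

```python
import numpy as np

def zscore_outliers(img, threshold=3.0):
    """Flag pixels whose |Z-score| exceeds `threshold` as outliers."""
    mu, sigma = img.mean(), img.std()
    z = (img - mu) / sigma
    return np.abs(z) > threshold

img = np.full((10, 10), 50.0)
img[0, 0] = 255.0  # one anomalous pixel
mask = zscore_outliers(img, threshold=3.0)
```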
in a two-dimensional image of a geological radar, an outlier may also be detected by a similar method, for example, calculating the variance of pixel values around each pixel, and if the variance of pixel values around a certain pixel is large, it is considered as an outlier;
in a multi-beam three-dimensional image, the missing data generally refers to an area where the depth value is not recorded. In this embodiment, an interpolation method is adopted to fill up missing data, such as bilinear interpolation, cubic spline interpolation, and the like; before interpolation, the position of the missing data needs to be detected, and common methods include searching for a pixel with a pixel value of 0 and searching for a pixel with a pixel value of 0NaNIs a pixel of (1);
wherein,NaNindicating missing data, i.e. unusable values.
In two-dimensional images of geological radar, missing data generally refers to areas where the signal intensity is too low or too high to obtain valid data, and similar methods can be used to detect the location of the missing data, for example, to find pixels with pixel values of 0 or outside a reasonable range.
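The missing-data filling can be sketched as below; as a simplifying assumption, a 4-neighbour mean stands in for the full bilinear or cubic-spline interpolation named above:

```python
import numpy as np

def fill_missing(img):
    """Replace NaN pixels with the mean of their valid 4-neighbours."""
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.where(np.isnan(img))):
        neigh = []
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not np.isnan(img[ny, nx]):
                neigh.append(img[ny, nx])
        if neigh:
            out[y, x] = np.mean(neigh)
    return out

depth = np.array([[1.0, 2.0, 3.0],
                  [4.0, np.nan, 6.0],  # NaN marks a missing depth value
                  [7.0, 8.0, 9.0]])
filled = fill_missing(depth)
```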
In this embodiment, the step S2 includes the following sub-steps:
s21, detecting key points in the preprocessed three-dimensional image and the preprocessed two-dimensional image by using a Gaussian difference operator and a Gaussian blur function, and calculating a local feature descriptor of each key point;
s22, calculating the matching degree of the local feature descriptors in the preprocessed three-dimensional image and the preprocessed two-dimensional image by adopting the distance measure, and obtaining the local features in the three-dimensional image and the preprocessed two-dimensional image.
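The Gaussian-difference keypoint detection of step S21 might be sketched as follows; the kernel radius, sigma values, and helper names are illustrative assumptions, and only the DoG response (not the full descriptor computation) is shown:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalised 2D Gaussian kernel of size (2*radius+1)^2."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma, radius=4):
    """Gaussian blur by direct windowed convolution (edge padding)."""
    k = gaussian_kernel(sigma, radius)
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + 2*radius+1, x:x + 2*radius+1] * k)
    return out

def dog(img, s1=1.0, s2=2.0):
    """Difference-of-Gaussians response used to locate keypoints."""
    return blur(img, s1) - blur(img, s2)

img = np.zeros((15, 15))
img[7, 7] = 1.0  # a single bright blob centre
response = dog(img)
peak = np.unravel_index(np.argmax(response), response.shape)
```

The strongest DoG response sits on the blob centre, which is the behaviour S21 relies on for keypoint detection.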
In this embodiment, the step S3 includes the following sub-steps:
s31, matching the local features in the images by adopting the Euclidean distance as the matching criterion to obtain the preliminarily matched local features, where the formula is as follows:

$$dist(D_i, E_j) = \sqrt{\sum_{k=1}^{m} \left(D_{i,k} - E_{j,k}\right)^{2}}$$

wherein $D_i$ is the local feature descriptor of the $i$-th keypoint in the preprocessed three-dimensional image, $E_j$ is the local feature descriptor of the $j$-th keypoint in the preprocessed two-dimensional image, $m$ is the dimension of the feature descriptors, $dist(\cdot)$ is the Euclidean distance function, $k$ is the element index, and $D_{i,k}$ and $E_{j,k}$ are respectively the $k$-th elements of the two descriptors;
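A minimal NumPy sketch of this nearest-neighbour Euclidean matching (the function name and toy descriptors are illustrative assumptions):

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """For each descriptor in desc_a, index of the Euclidean-nearest in desc_b."""
    # Pairwise distance matrix of shape (Na, Nb) via broadcasting
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argmin(d, axis=1)

a = np.array([[0.0, 0.0], [1.0, 1.0]])   # descriptors from the 3D image
b = np.array([[0.9, 1.1], [0.1, -0.1]])  # descriptors from the 2D image
matches = match_descriptors(a, b)
```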
s32, performing outlier rejection on the preliminarily matched local features by adopting a RANSAC algorithm to obtain local features with outlier rejection;
the specific RANSAC algorithm steps are as follows:
s3201, randomly selecting a group of sample points, and fitting a model by using the points;
s3202, calculating the distance from the rest points to the model, wherein the points with the distance smaller than a threshold value are called inner points;
s3203, if the number of the interior points is larger than a certain proportion of total points, the current model is reliable, and a better model can be re-fitted by using all the interior points;
repeating steps S3201-S3203 until the preset iteration times are reached, and fitting all internal points to a final model;
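Steps S3201-S3203 can be sketched for a simple 2D line model; the iteration count, tolerance, and toy data are illustrative assumptions:

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.5, seed=0):
    """Fit y = a*x + b by RANSAC: sample 2 points, count inliers, refit."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # S3201: random minimal sample and candidate model
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # S3202: residuals of all points; small residual => inlier
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < inlier_tol
        # S3203: keep the model with the largest inlier set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final refit using all inliers (least squares)
    a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return a, b, best_inliers

x = np.arange(10, dtype=float)
pts = np.column_stack([x, 2 * x + 1])
pts[3] = [3.0, 40.0]  # one gross outlier
slope, intercept, inliers = ransac_line(pts)
```

The outlier is excluded from the final inlier set, so the refit recovers the underlying line.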
s33, performing geometric verification on local features with outlier removed by using R-Hough transformation, and completing matching of the local features to obtain matched local features;
in matching keypoints in two images, it is necessary to determine whether a certain geometric relationship is satisfied between the keypoints. In order to realize geometric verification, R-Hough transformation can be adopted, namely an accumulator is utilized to record the geometric relationship between key points in two images; specifically, for each keypoint, we can calculate some geometrical relationships between other keypoints, such as distance, angle, rotation, etc. We can then translate these geometric relationships to one point in the accumulator and in the R-Hough transform, one point for each line in the parameter space, so we can translate these geometric relationships to one point in the parameter space. The accumulator is a data structure that records these points and can be used to find the most frequently occurring straight line parameter values, i.e. the points corresponding to the strongest peaks.
In a specific implementation, a straight line can be converted into a point in the parameter space by the following formula:

$$\rho = x\cos\theta + y\sin\theta$$

wherein $\rho$ and $\theta$ are respectively the distance and the angle parameters of the straight line, and $x$ and $y$ represent the coordinates of a point on the line; in this embodiment, a discrete set of $\rho$ and $\theta$ values is set, each point is converted to the nearest discrete point in the parameter space, and the count value of the corresponding point in the accumulator is incremented by one.
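The accumulator voting described above can be sketched with the rho/theta line parameterization; the bin counts and the toy point set are illustrative assumptions:

```python
import numpy as np

def hough_accumulate(points, n_theta=180, rho_max=50, n_rho=101):
    """Vote each point into a (rho, theta) accumulator: rho = x*cos(t) + y*sin(t)."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        for t_idx, theta in enumerate(thetas):
            rho = x * np.cos(theta) + y * np.sin(theta)
            r_idx = np.argmin(np.abs(rhos - rho))  # nearest discrete rho bin
            acc[r_idx, t_idx] += 1
    return acc, rhos, thetas

# Points on the vertical line x = 10: a strongest peak appears at (rho=10, theta=0).
pts = [(10.0, float(y)) for y in range(5)]
acc, rhos, thetas = hough_accumulate(pts)
```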
In this embodiment, the step S4 adopts a fusion method based on the weighted average, assigning different weights to different features according to their importance or credibility among the matched local features to obtain the fused new image; the mathematical expression of the weighted-average fusion is:

$$F(x,y) = \sum_{b=1}^{n} w_b(x,y)\, I_b(x,y)$$

wherein $F(x,y)$ is the fused image pixel value, $I_b(x,y)$ is the pixel value of the $b$-th image at position $(x,y)$, $w_b(x,y)$ is the weight of the $b$-th image at that position, $b$ is the image index, and $n$ is the number of images to be fused.
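The weighted-average fusion of step S4 can be sketched as follows; normalising the weights to sum to one at each pixel is an added assumption, and the weight values are illustrative:

```python
import numpy as np

def fuse(images, weights):
    """Weighted-average fusion: F(x,y) = sum_b w_b(x,y) * I_b(x,y)."""
    images = np.asarray(images, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum(axis=0)  # normalise per pixel
    return (weights * images).sum(axis=0)

a = np.full((2, 2), 10.0)
b = np.full((2, 2), 30.0)
# Trust image `a` three times as much as `b` (illustrative credibility weights).
fused = fuse([a, b], [np.full((2, 2), 3.0), np.full((2, 2), 1.0)])
```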
In this embodiment, the step S5 determines whether a scouring phenomenon exists by comparing the pixel values of different areas in the fused new image;
if the difference between the pixel value of a certain area and the pixel values of the surrounding areas is higher than a preset value, a scouring phenomenon exists in that area, and the scouring degree is evaluated;
if the difference between the pixel value of a certain area and the pixel values of the surrounding areas is not higher than the preset value, no scouring has occurred in that area;
the method for evaluating the scouring degree of the bridge foundation comprises the following steps:
s51, calculating the average value of the pixels in the scour area of the fused new image, comparing it with the average value of the pixels in the normal area, and evaluating the severity of the scouring;
s52, performing binarization processing on the fused new image by a threshold method, segmenting the scour area in the fused new image, and determining the scour range;
the threshold method formula is:
wherein,Has a result of the threshold value being set,sigmais the standard deviation of the image and,dis the average value of the image and,ais an adjustable parameter and is adjusted according to actual conditions.
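The threshold binarization of step S52 can be sketched as below, assuming scoured pixels appear brighter than the threshold H (flip the comparison if scour appears darker in the fused image; the toy values are illustrative):

```python
import numpy as np

def binarize(img, a=1.0):
    """Threshold H = d + a*sigma (image mean plus a standard deviations)."""
    h = img.mean() + a * img.std()
    return (img > h).astype(np.uint8), h

img = np.array([[10.0, 10.0, 10.0],
                [10.0, 80.0, 10.0],
                [10.0, 10.0, 10.0]])
mask, h = binarize(img, a=1.0)
```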
And S53, taking the severity of the scouring and the scour range together as the scouring degree, completing the evaluation of the scouring degree.
Calculating the pixel average of the scour area $m_{1}$ and the pixel average of the surrounding normal area $m_{2}$, the severity of the scouring is evaluated from the difference between the two; a larger difference represents more severe scouring.
The difference formula is:

$$\Delta = m_{1} - m_{2}$$

wherein $\Delta$ is the difference between the pixel means of the scour area and the surrounding normal area, $m_{1}$ is the pixel average of the scour area, and $m_{2}$ is the pixel average of the surrounding normal area.
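The severity measure above can be sketched as follows (the mask and pixel values are illustrative assumptions):

```python
import numpy as np

def scour_severity(img, scour_mask):
    """Delta = m1 - m2: scour-area mean minus surrounding normal-area mean."""
    m1 = img[scour_mask].mean()
    m2 = img[~scour_mask].mean()
    return m1 - m2

img = np.array([[10.0, 10.0],
                [10.0, 70.0]])
mask = np.array([[False, False],
                 [False, True]])  # bottom-right pixel flagged as scour
delta = scour_severity(img, mask)
```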
In the description of the present invention, it should be understood that the terms "center," "thickness," "upper," "lower," "horizontal," "top," "bottom," "inner," "outer," "radial," and the like indicate or are based on the orientation or positional relationship shown in the drawings, merely to facilitate description of the present invention and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be configured and operated in a particular orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be interpreted as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defined as "first," "second," "third," or the like, may explicitly or implicitly include one or more such feature.
Claims (6)
1. The method for identifying and evaluating the bridge foundation scouring is characterized by comprising the following steps of:
s1, acquiring a multi-beam three-dimensional image and a geological radar two-dimensional image, and respectively preprocessing the multi-beam three-dimensional image and the geological radar two-dimensional image to obtain a preprocessed three-dimensional image and a preprocessed two-dimensional image;
s2, respectively extracting local features of the preprocessed multi-beam three-dimensional image and the preprocessed geological radar two-dimensional image to obtain local features in the three-dimensional image and the geological radar two-dimensional image;
s3, matching local features in the three-dimensional image and the two-dimensional image, removing features with poor matching and abnormal values, and completing geometric verification to obtain matched local features;
s4, fusing the matched local features to obtain a fused new image;
s5, analyzing and evaluating the new fused image, and evaluating the scouring degree of the bridge foundation when the scouring phenomenon of the bridge foundation in the new fused image is recognized, so as to finish the bridge foundation scouring recognition and evaluation;
step S5, determining whether a scouring phenomenon exists or not by comparing pixel values of different areas in the fused new image;
if the difference value between the pixel value of a certain area and the pixel value of the surrounding area is higher than a preset value, the area has a scouring phenomenon, and the scouring degree is evaluated;
if the difference value between the pixel value of a certain area and the pixel value of the surrounding area is not higher than a preset value, the area is not flushed;
the method for evaluating the scouring degree of the bridge foundation comprises the following steps:
s51, calculating the average value of the pixels in the scoured area of the fused new image, comparing it with the average value of the pixels in the normal area, and evaluating the severity of the scouring;
s52, performing binarization processing on the fused new image by a threshold method, segmenting the scoured area in the fused new image, and determining the scour range;
and S53, taking the severity of the scouring together with the scour range as the scouring degree, thereby completing the evaluation of the scouring degree.
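The pixel-mean comparison and threshold binarization of steps S51–S53 can be sketched in a few lines of NumPy. This is a minimal illustration, not the claimed implementation: the threshold value, the relative-deviation severity metric, and the assumption that scoured pixels read darker than normal ones are all illustrative choices not fixed by the claim.

```python
import numpy as np

def assess_scour(fused, scour_mask, threshold=0.5):
    """Sketch of steps S51-S53: compare the mean pixel value of the
    suspected scour area with that of the normal area (S51), binarize
    the fused image with a threshold to delimit the scour range (S52),
    and return both together as the scour degree (S53)."""
    scour_mean = fused[scour_mask].mean()     # mean pixel value in the scour area
    normal_mean = fused[~scour_mask].mean()   # mean pixel value in the normal area
    # severity as the relative deviation between the two means (assumed metric)
    severity = abs(scour_mean - normal_mean) / max(normal_mean, 1e-9)
    # assume scoured pixels are darker than the threshold (illustrative)
    binary = (fused < threshold).astype(np.uint8)
    extent = int(binary.sum())                # scour range as a pixel count
    return severity, binary, extent
```

A single dark region in an otherwise bright fused image would thus yield a nonzero severity and a pixel-count extent.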
2. The method for identifying and evaluating a bridge foundation flushing according to claim 1, characterized in that said step S1 comprises the following sub-steps:
s11, denoising the multi-beam three-dimensional image and the geological radar two-dimensional image to obtain a denoised three-dimensional image and a denoised two-dimensional image;
s12, removing salt and pepper noise in the three-dimensional image and the two-dimensional image after noise removal by adopting median filtering, and obtaining the three-dimensional image and the two-dimensional image after median filtering;
s13, correcting the three-dimensional image and the two-dimensional image after the median filtering to obtain a corrected three-dimensional image and a corrected two-dimensional image;
s14, registering the corrected three-dimensional image and the corrected two-dimensional image to enable the corrected three-dimensional image and the corrected two-dimensional image to be aligned in space, and obtaining a registered three-dimensional image and a registered two-dimensional image;
and S15, detecting the quality of the three-dimensional image and the two-dimensional image after registration, and obtaining the preprocessed three-dimensional image and the preprocessed two-dimensional image when the quality standard is reached.
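As an illustration of the median-filtering sub-step S12, a 3x3 median filter removes isolated salt-and-pepper pixels while preserving edges. This NumPy-only sketch assumes a 3x3 kernel and replicate edge padding, neither of which the claim fixes:

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter for salt-and-pepper noise (step S12).
    Edges are handled by replicate ("edge") padding."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # nine shifted views of the padded image, one per kernel position
    windows = [padded[r:r + h, c:c + w] for r in range(3) for c in range(3)]
    return np.median(np.stack(windows), axis=0)
```

A single corrupted pixel in an otherwise flat region is replaced by the local median, which is the behavior that makes median filtering suitable for impulse noise.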
3. The method for identifying and evaluating a bridge foundation flushing according to claim 2, wherein the quality detection in step S15 includes image outlier detection and image missing-data detection.
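One possible reading of the quality detection of claim 3 — missing-data detection as a NaN scan and outlier detection as a z-score test — can be sketched as follows. The z-score cut-off is an assumption; the claim does not specify which statistics are used:

```python
import numpy as np

def quality_check(img, z_max=3.0):
    """Illustrative quality detection for step S15: fail on missing data
    (NaN pixels) or on statistical outliers (|z-score| above z_max)."""
    img = np.asarray(img, dtype=float)
    if np.isnan(img).any():
        return False                            # image missing-data detection
    z = (img - img.mean()) / (img.std() + 1e-12)
    return not bool((np.abs(z) > z_max).any())  # image outlier detection
```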
4. The method for identifying and evaluating a bridge foundation flushing according to claim 1, characterized in that said step S2 comprises the following sub-steps:
s21, detecting key points in the preprocessed three-dimensional image and the preprocessed two-dimensional image by using a difference-of-Gaussians operator and a Gaussian blur function, and calculating a local feature descriptor for each key point;
s22, calculating the degree of matching between the local feature descriptors of the preprocessed three-dimensional image and those of the preprocessed two-dimensional image by using a distance measure, thereby obtaining the local features in the three-dimensional image and the two-dimensional image.
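The difference-of-Gaussians (DoG) response of sub-step S21 can be sketched with a separable Gaussian blur. This simplified version thresholds |DoG| directly instead of searching for scale-space extrema as a full SIFT pipeline would, and the sigma, scale factor, and threshold values are illustrative assumptions:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur (the Gaussian blur function of step S21)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # convolve each row, then each column, with the 1-D kernel
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def dog_keypoints(img, sigma=1.0, k=1.6, thresh=0.01):
    """Simplified difference-of-Gaussians detector (step S21): report
    pixels whose |DoG| response exceeds an assumed threshold."""
    dog = gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)
    ys, xs = np.nonzero(np.abs(dog) > thresh)
    return list(zip(ys.tolist(), xs.tolist()))
```

A bright isolated feature produces a strong DoG response at its location, while a flat (zero) image produces none.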
5. The method for identifying and evaluating a bridge foundation flushing according to claim 1, characterized in that said step S3 comprises the following sub-steps:
s31, matching the local features in the images by using the Euclidean distance as the matching criterion to obtain preliminarily matched local features, according to the formula:
$$\operatorname{dist}(A_i, B_j) = \sqrt{\sum_{k=1}^{m} \left(a_{i,k} - b_{j,k}\right)^{2}}$$
wherein $A_i$ is the local feature descriptor of the $i$-th key point in the preprocessed three-dimensional image, $B_j$ is the local feature descriptor of the $j$-th key point in the preprocessed two-dimensional image, $m$ is the dimension of the feature descriptors, $\operatorname{dist}(\cdot)$ is the Euclidean distance function, $k$ indexes the descriptor elements, and $a_{i,k}$ and $b_{j,k}$ are the $k$-th elements of the descriptors of the $i$-th key point in the three-dimensional image and of the $j$-th key point in the two-dimensional image, respectively;
s32, performing outlier rejection on the preliminarily matched local features by adopting a RANSAC algorithm to obtain local features with outlier rejection;
s33, performing geometric verification on the local features with outlier removed by using R-Hough transformation, and completing matching of the local features to obtain matched local features.
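Steps S31–S32 can be illustrated with nearest-neighbour descriptor matching followed by a bare-bones RANSAC. The pure-translation motion model and the iteration/tolerance values below are illustrative simplifications: the claim does not fix the geometric model, and step S33's R-Hough geometric verification is omitted here.

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Step S31: nearest-neighbour matching by Euclidean distance."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return [(i, int(np.argmin(d[i]))) for i in range(len(desc_a))]

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Step S32 sketch: RANSAC with a pure-translation model.
    One sampled correspondence fixes a translation hypothesis; the
    hypothesis with the most inliers wins. Returns the inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        j = rng.integers(len(src))
        t = dst[j] - src[j]                    # translation hypothesis
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Correspondences that disagree with the dominant translation are rejected as outliers, which is the effect the claim attributes to the RANSAC stage.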
6. The method for identifying and evaluating a bridge foundation flushing according to claim 1, wherein step S4 adopts a fusion method based on a weighted average: different weights are assigned to different features in the matched local features according to their importance or credibility, giving the fused new image; the mathematical expression of the weighted-average fusion method is:
$$F(x, y) = \sum_{b=1}^{n} w_b(x, y)\, I_b(x, y)$$
wherein $F(x, y)$ is the fused image pixel value at position $(x, y)$, $I_b(x, y)$ is the pixel value of the $b$-th image at $(x, y)$, $w_b(x, y)$ is the weight of the $b$-th image at $(x, y)$, $b$ is the image index, and $n$ is the number of images to be fused.
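The weighted-average fusion of claim 6 can be sketched directly in NumPy. The per-pixel weight normalization below is an added assumption; the claim only states that the weights reflect the importance or credibility of each feature.

```python
import numpy as np

def weighted_fusion(images, weights):
    """Claim 6 fusion: F(x, y) = sum over b of w_b(x, y) * I_b(x, y).
    weights may be per-image scalars or per-pixel maps; they are
    normalized so that they sum to 1 at every pixel (assumption)."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    w = np.stack([np.broadcast_to(np.asarray(wb, dtype=float), stack.shape[1:])
                  for wb in weights])
    w = w / w.sum(axis=0)          # normalize the weights per pixel
    return (w * stack).sum(axis=0)
```

With scalar weights this reduces to an ordinary weighted mean of the input images.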
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311087803.0A CN116844142B (en) | 2023-08-28 | 2023-08-28 | Bridge foundation scouring identification and assessment method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116844142A CN116844142A (en) | 2023-10-03 |
CN116844142B true CN116844142B (en) | 2023-11-21 |
Family
ID=88174575
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311087803.0A Active CN116844142B (en) | 2023-08-28 | 2023-08-28 | Bridge foundation scouring identification and assessment method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116844142B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104134188A (en) * | 2014-07-29 | 2014-11-05 | 湖南大学 | Three-dimensional visual information acquisition method based on two-dimensional and three-dimensional video camera fusion |
CN104268935A (en) * | 2014-09-18 | 2015-01-07 | 华南理工大学 | Feature-based airborne laser point cloud and image data fusion system and method |
CN104715254A (en) * | 2015-03-17 | 2015-06-17 | 东南大学 | Ordinary object recognizing method based on 2D and 3D SIFT feature fusion |
CN106052604A (en) * | 2016-05-30 | 2016-10-26 | 北京交通大学 | Device for measurement of local scour depth around bridge pier |
CN108009363A (en) * | 2017-12-04 | 2018-05-08 | 中铁二院工程集团有限责任公司 | A kind of mud-rock flow washes away the computational methods of bridge pier |
CN108363062A (en) * | 2018-05-15 | 2018-08-03 | 扬州大学 | A kind of pier subsidence hole detection device |
CN110176020A (en) * | 2019-04-09 | 2019-08-27 | 广东工业大学 | A kind of bird's nest impurity method for sorting merging 2D and 3D rendering |
CN112150520A (en) * | 2020-08-18 | 2020-12-29 | 徐州华讯科技有限公司 | Image registration method based on feature points |
CN114088063A (en) * | 2021-10-19 | 2022-02-25 | 青海省交通工程技术服务中心 | Pier local scour terrain measurement method based on mobile terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113376639B (en) * | 2021-07-19 | 2022-08-05 | 福州大学 | Scanning sonar imaging-based three-dimensional reconstruction method for topography of pier foundation scour area |
Non-Patent Citations (6)
Title |
---|
Non-contact measurement method for reconstructing three-dimensional scour depth field based on binocular vision technology in laboratory;Sijia Zhu 等;《Measurement》;1-18 * |
Applied research on bridge scour detection based on geological radar; Tang Tang et al.; Railway and Highway; 2022; Vol. 42, No. 2; 132-134, 138 *
Research on detection methods for underwater bridge foundations based on geological radar and multi-beam sonar; Tang Tang et al.; Proceedings of the 12th Annual Academic Conference of the Maintenance and Management Branch of the China Highway and Transportation Society; 161-166 *
Applied research on bridge scour detection based on geological radar; Tang Tang et al.; Railway and Highway; Vol. 42, No. 2; 132-134, 138 *
Analysis and design of an intelligent *** for identifying urban road defects based on geological radar; Peng Wei et al.; Acta Geologica Sichuan; Vol. 48, No. 1; 133-136 *
Application of a multi-beam testing *** in bridge foundation scour detection; Wang Dehui et al.; Transport World; 178-180 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107808378B (en) | Method for detecting potential defects of complex-structure casting based on vertical longitudinal and transverse line profile features | |
CN108872997B (en) | Submarine line detection method based on side-scan sonar data fusion and precision processing | |
CN108229342B (en) | Automatic sea surface ship target detection method | |
CN111667470B (en) | Industrial pipeline flaw detection inner wall detection method based on digital image | |
CN108846402B (en) | Automatic extraction method for terrace field ridges based on multi-source data | |
CN112017223A (en) | Heterologous image registration method based on improved SIFT-Delaunay | |
CN113030244B (en) | Inversion imaging method and system for transmission line tower corrosion defect magnetic flux leakage detection signal | |
CN116152115B (en) | Garbage image denoising processing method based on computer vision | |
CN117710399B (en) | Crack contour extraction method in geological survey based on vision | |
CN109685733B (en) | Lead-zinc flotation foam image space-time combined denoising method based on bubble motion stability analysis | |
CN115797473B (en) | Concrete forming evaluation method for civil engineering | |
WO2021000948A1 (en) | Counterweight weight detection method and system, and acquisition method and system, and crane | |
CN111310771B (en) | Road image extraction method, device and equipment of remote sensing image and storage medium | |
CN115272336A (en) | Metal part defect accurate detection method based on gradient vector | |
Nair et al. | Flood water depth estimation—A survey | |
CN117173590A (en) | Water body abnormality monitoring method based on multisource time sequence remote sensing image | |
CN109948629B (en) | GIS equipment X-ray image fault detection method based on SIFT features | |
CN115018785A (en) | Hoisting steel wire rope tension detection method based on visual vibration frequency identification | |
CN113408519B (en) | Method and system for pointer instrument reading based on template rotation matching | |
CN107808165B (en) | Infrared image matching method based on SUSAN corner detection | |
CN116452613B (en) | Crack contour extraction method in geological survey | |
CN116844142B (en) | Bridge foundation scouring identification and assessment method | |
CN114359149A (en) | Dam bank dangerous case video detection method and system based on real-time image edge enhancement | |
CN114359251A (en) | Automatic identification method for concrete surface damage | |
Adu-Gyamfi et al. | Functional evaluation of pavement condition using a complete vision system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||