CN114373156A - Non-contact type water level, flow velocity and flow intelligent monitoring system based on video image recognition algorithm - Google Patents


Info

Publication number
CN114373156A
CN114373156A
Authority
CN
China
Prior art keywords
flow
image
water level
flow rate
intelligent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210048291.6A
Other languages
Chinese (zh)
Inventor
吕国敏
刘昌军
梁辰希
张顺福
马强
乔楠
张启义
姚秋玲
孙涛
姚吉利
Current Assignee
China Institute of Water Resources and Hydropower Research
Original Assignee
China Institute of Water Resources and Hydropower Research
Priority date
Filing date
Publication date
Application filed by China Institute of Water Resources and Hydropower Research
Priority to CN202210048291.6A
Publication of CN114373156A
Legal status: Withdrawn


Classifications

    • G01C13/00: Physics; Measuring; Testing; Surveying specially adapted to open water, e.g. sea, lake, river or canal
    • G06F18/23: Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Clustering techniques
    • G06T7/13: Physics; Computing; Image data processing or generation, in general; Image analysis; Segmentation; Edge detection
    • G06T2207/10016: Indexing scheme for image analysis or image enhancement; Image acquisition modality; Video; Image sequence
    • G06T2207/10024: Indexing scheme for image analysis or image enhancement; Image acquisition modality; Color image
    • Y02A90/30: Technologies for adaptation to climate change; Technologies having an indirect contribution to adaptation; Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Hydrology & Water Resources (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a non-contact intelligent monitoring system for water level, flow rate and flow based on a video image recognition algorithm. The specific process is as follows: a video station is connected to a video stream data access module; the access module extracts water gauge pictures and passes them to an intelligent water level identification module, and extracts water surface video and passes it to an intelligent flow rate identification module; the water level identification module supplies the water level value, and the flow rate identification module supplies the flow rate value, to a water level-flow rate-flow coupling calculation model, and the flow value calculated by the model is displayed on a front-end display. The beneficial effects of the invention are: a set of coupling calculation models is developed that analyses the flow velocity, water level and flow of a river channel in real time from the video image data returned by a camera, builds a water level and flow monitoring system, and analyses and forecasts the current flood risk of the drainage basin in real time.

Description

Non-contact type water level, flow velocity and flow intelligent monitoring system based on video image recognition algorithm
Technical Field
The invention relates to the fields of artificial intelligence and video image recognition, and in particular to a non-contact intelligent water level, flow rate and flow monitoring system based on a video image recognition algorithm.
Background
The invention aims to develop an intelligent non-contact water level-flow rate-flow coupling calculation model based on video images, analyze flow rate, water level and flow data of a river channel in real time by utilizing video image data returned by a camera, construct a water level flow monitoring system and analyze and forecast the current watershed flood risk in real time.
Disclosure of Invention
The intelligent water level, flow rate and flow monitoring module extracts water gauge pictures and water surface video from the video stream of a camera at the river site, screens the data that meet the requirements, and transmits them to the intelligent water level identification module and the intelligent flow rate identification module respectively. These identify the current water level and the current flow rate; the flow is then calculated from the water level-flow rate-flow coupling calculation model, and the water level, flow rate and flow values are displayed at the front end, giving the non-contact integrated intelligent monitoring system for water level, flow rate and flow based on a video image recognition algorithm.
The technical scheme adopted by the invention is as follows: the non-contact intelligent monitoring system for water level, flow rate and flow based on the video image recognition algorithm comprises a data input end, an intelligent water level, flow rate and flow detection module, and a front-end display. The data input end comprises a video station and a video stream data access module, and the intelligent detection module comprises an intelligent water level identification module and an intelligent flow rate identification module. The system is characterized in that: the video station is connected to the video stream data access module; the access module passes water gauge pictures to the intelligent water level identification module and water surface video to the intelligent flow rate identification module; the water level identification module supplies the water level value, and the flow rate identification module supplies the flow rate value, to a water level-flow rate-flow coupling calculation model, and the flow value calculated by the model is displayed on the front-end display.
Further, the identification process of the intelligent water level identification module is as follows:
firstly, preprocessing an image;
the obtained colour water gauge image is grayed with a weighted-average algorithm: the RGB three-channel values are averaged with different weights according to their importance. Because the human eye is most sensitive to green and least sensitive to blue, a reasonable grayscale image is obtained by the weighted average of the RGB channel values according to the following formula:
I(i,j) = 0.299 × R(i,j) + 0.587 × G(i,j) + 0.114 × B(i,j)   (1)
where I is the output grayscale image matrix; R, G and B are the three RGB channels of the colour image; and (i, j) is the pixel coordinate in the image matrix.
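The graying step of Eq. (1) can be sketched as follows (a minimal illustration using the standard BT.601 luma weights; the function name is illustrative):

```python
import numpy as np

def to_gray(rgb):
    """Weighted-average graying per Eq. (1):
    I = 0.299 R + 0.587 G + 0.114 B (channels in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# A pure-green pixel maps to 0.587, reflecting the eye's higher green
# sensitivity; a white pixel maps to 1.0 since the weights sum to 1.
gray = to_gray(np.array([[[0.0, 1.0, 0.0]]]))
```

Because the weights sum to one, the grayscale output stays in the same value range as the input channels.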
Secondly, removing the background of the image;
the water gauge image is separated from the background using an inter-column single-difference algorithm. For the preprocessed grayscale image matrix I_{m×n} (m rows and n columns of I), each column of I_{m×n} is accumulated and averaged to obtain a new vector P_n, namely:
P(j) = (1/m) × Σ_{i=0}^{m−1} I(i,j), j ∈ (0, n−1)   (2)
A single difference is then taken between adjacent columns of the vector P_n to obtain the single-difference vector P_, where:
P_(0) = 0,
P_(i) = |P(i) − P(i−1)|, i ∈ (1, n−1)   (3)
The 8 largest single-difference values are selected and, for each, its subscript, the inter-column single-difference value itself, and the value obtained by subtracting the maximum subscript difference from the sum of its subscript differences to the other 7 values are stored; the reciprocal of the square of this subtraction value is used as a weight to process the corresponding single-difference value. That is, in the matrix select_{8×3}: select(i,0) is the subscript of the (i+1)-th largest single-difference value; select(i,1) is the (i+1)-th largest single-difference value; select(i,2) is the difference between the sum of the subscript differences from the (i+1)-th largest value to the others and the maximum subscript difference.
The weight vector P_8 is:
P(i) = 1 / select(i,2)²   (4)
The weights of the values clustered among the 8 largest are larger than those of the other values, so the clustered single-difference values are attenuated slowly while the non-clustered ones are attenuated quickly. The clustering of single-difference positions marks the region where the water gauge lies, a consequence of the characteristic distribution of characters on the gauge face; the non-clustered single-difference values can therefore be treated as interference, which is continually weakened by:
select(i,1) = select(i,1) × P(i)   (5)
A threshold is then set, the maximum and minimum subscripts of the clustered single-difference values exceeding the threshold are extracted to give a subscript range, and the range boundaries are the segmentation boundaries of the water gauge; background separation of the water gauge is finally achieved from these boundaries.
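The core of the inter-column single-difference localisation can be sketched as follows (a simplified illustration: it keeps the column means of Eq. (2), the single differences of Eq. (3), and the top-8 selection, but omits the weighting of Eqs. (4)-(5); the function name and synthetic image are assumptions):

```python
import numpy as np

def gauge_column_range(gray, k=8):
    """Locate the column span of the water gauge: per-column means (Eq. 2),
    absolute inter-column single differences (Eq. 3), then the subscript
    range spanned by the k largest differences."""
    p = gray.mean(axis=0)                    # Eq. (2): per-column mean
    d = np.abs(np.diff(p, prepend=p[0]))     # Eq. (3): |P(i) - P(i-1)|, P_(0) = 0
    top = np.sort(np.argsort(d)[-k:])        # subscripts of the k largest values
    return int(top.min()), int(top.max())    # boundaries of the clustered region

# Synthetic frame: uniform background with a striped "gauge" in columns 10..17,
# giving exactly 8 large inter-column jumps (at columns 11..18).
img = np.zeros((50, 40))
img[:, 10:18] = np.array([0.0, 1.0] * 4)
lo, hi = gauge_column_range(img)
```

On this synthetic frame the clustered jumps bound the gauge, so `(lo, hi)` recovers the stripe region.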
thirdly, distinguishing red and black of a water gauge;
the red and black faces of the water gauge are distinguished with an inter-channel single-difference algorithm, and binarization of the gauge is realized with a partitioned Otsu algorithm. One channel is taken as the reference and differenced against the other two, i.e. ΔRG = R − G and ΔRB = R − B. Ideally, when the face is black the mean of most ΔRG and ΔRB values is close to 0, and when the face is red it is close to 255. The black-and-white result is a binary grayscale image whose two gray values are 0 and 255. Binarizing the gauge grayscale image with the partitioned Otsu method eliminates the interference of shadows in the image with matching and highlights the character information on the gauge face.
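The inter-channel single-difference test can be sketched as follows (a minimal illustration; the 128 midpoint used as the decision threshold and the function name are assumptions, since the text only gives the ideal extremes 0 and 255):

```python
import numpy as np

def face_colour(patch):
    """Classify a gauge patch as red- or black-faced from the mean of the
    inter-channel single differences dRG = R - G and dRB = R - B."""
    r = patch[..., 0].astype(float)
    g = patch[..., 1].astype(float)
    b = patch[..., 2].astype(float)
    drg = np.abs(r - g).mean()               # ~0 on black faces, ~255 on red
    drb = np.abs(r - b).mean()
    return "red" if (drg + drb) / 2 > 128 else "black"

red_patch = np.zeros((4, 4, 3), dtype=np.uint8)
red_patch[..., 0] = 255                      # pure red face
black_patch = np.full((4, 4, 3), 20, dtype=np.uint8)   # dark, near-neutral face
```

A saturated red patch drives both differences toward 255, while a neutral dark patch keeps them near 0, matching the ideal cases described above.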
fourthly, matching water gauges;
the water gauge image obtained by the above processing is matched, using a pyramid matching algorithm, against a prepared binarized digital template gauge to obtain an offset coefficient; the three-wire reading is then obtained from the correspondence between position and value on the template gauge, realizing automatic identification of the gauge reading;
and fifthly, reading by a water gauge.
Further, the identification process of the intelligent flow rate identification module is as follows:
firstly, acquiring a video image;
the camera collects water surface video and water gauge images in turn; the frame rate of the water surface video is set to 25 frames per second, and 8 seconds of video are collected for identifying the surface flow velocity, while one frame of water gauge image is collected for identifying the water level;
secondly, calibrating a camera;
a camera erected on the bank beside the river shoots video of the river water surface, and 6 marker points are laid out on the two banks; the relation between physical coordinates and image coordinates is found through a pinhole imaging model:
a relationship is established between the pixel coordinates (x, y) and the physical coordinates (X, Y, Z) using the following collinearity equations:
x = x0 − c × [m11(X − X0) + m12(Y − Y0) + m13(Z − Z0)] / [m31(X − X0) + m32(Y − Y0) + m33(Z − Z0)]   (6)
y = y0 − c × [m21(X − X0) + m22(Y − Y0) + m23(Z − Z0)] / [m31(X − X0) + m32(Y − Y0) + m33(Z − Z0)]   (7)
where c is the focal length, m_ij (1 ≤ i ≤ 3, 1 ≤ j ≤ 3) are the rotation coefficients of the camera, (X0, Y0, Z0) are the physical coordinates of the camera, and (x0, y0) is the centre of the screen image. These coefficients are determined by a camera calibration procedure using the six ground control points, with the emphasis on determining the length of the STI horizontal axis, which depends on the conversion accuracy of this geometric correction;
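The collinearity projection of Eqs. (6)-(7) can be sketched as follows (a minimal illustration under one common sign convention; conventions for the rotation matrix and image axes vary, and the function name is an assumption):

```python
import numpy as np

def project(Xw, cam_pos, R, c, principal=(0.0, 0.0)):
    """Project a world point (X, Y, Z) to image coordinates (x, y) via the
    collinearity equations; R holds the rotation coefficients m_ij,
    cam_pos is (X0, Y0, Z0), c the focal length, principal is (x0, y0)."""
    d = np.asarray(Xw, float) - np.asarray(cam_pos, float)
    u, v, w = R @ d                          # rotated camera-frame coordinates
    x0, y0 = principal
    return x0 - c * u / w, y0 - c * v / w

# Camera at the origin with identity rotation: a point on the optical axis
# projects to the principal point.
x, y = project((0.0, 0.0, 10.0), (0.0, 0.0, 0.0), np.eye(3), c=1.0)
```

In practice R, c, (X0, Y0, Z0) and (x0, y0) are recovered from the six ground control points by the calibration step, not assumed known as here.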
thirdly, correcting the image geometry;
optical distortion in the image is corrected by a direct linear transformation method with distortion correction, using the control points laid out on site;
fourthly, synthesizing a space-time image;
the original space-time image is synthesized from a detection line. A single camera continuously acquires M original grayscale images at time interval ΔT; a detection line is set on each image, with a width of 1 pixel and a length of N pixels. Taking the length M of the image sequence as the vertical coordinate and the length N of the velocity-measurement line as the horizontal coordinate, a space-time image (STI) of size M × N pixels is established: its horizontal coordinate is the pixel length N of the velocity-measurement line and its vertical coordinate is the acquisition time T of the M images;
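The STI synthesis step can be sketched as follows (a minimal illustration; the function name and the random frames are assumptions):

```python
import numpy as np

def build_sti(frames, line_pixels):
    """Stack the gray values sampled along one 1-pixel-wide detection line
    over M frames into an M x N space-time image (STI). `frames` is an
    (M, H, W) array; `line_pixels` lists the N (row, col) samples."""
    rows, cols = map(np.array, zip(*line_pixels))
    return frames[:, rows, cols]             # shape (M, N): time down, space across

# 200 frames (8 s at 25 fps) of a 16x16 scene; detection line along row 8.
M, H, W = 200, 16, 16
frames = np.random.rand(M, H, W)
sti = build_sti(frames, [(8, c) for c in range(W)])
```

Each row of the resulting STI is one instant of the detection line, so surface features drifting along the line appear as inclined stripes whose slope encodes the velocity.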
fifthly, preprocessing a spatio-temporal image;
histogram equalization is applied to the space-time image to increase its contrast. Multi-level edge detection is then carried out on the equalized space-time image with a Canny operator to highlight its texture edges: 1) the gradient magnitude and direction at each point are computed with a Sobel operator; the default Sobel convolution kernel size of 3 is increased to 5, which retains more detail; 2) non-maximum suppression (keeping only local maxima) eliminates stray responses from the edge detection; 3) double thresholds determine the real and potential edges, with the minimum threshold set to 90 and the maximum threshold set to 170.
The transverse and longitudinal convolution kernels of the size-5 Sobel operator are:
Gx =
[ −1  −2   0   2   1
  −4  −8   0   8   4
  −6 −12   0  12   6
  −4  −8   0   8   4
  −1  −2   0   2   1 ]
and the longitudinal kernel is its transpose, Gy = Gx^T.
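The size-5 Sobel kernels referenced in step 1) can be generated as separable outer products of a smoothing vector and a derivative vector (this is the construction used by OpenCV for kernel size 5; it is offered here as an assumption, since the exact kernel values are not legible in the source):

```python
import numpy as np

# Separable size-5 Sobel: smoothing [1, 4, 6, 4, 1] crossed with the
# derivative [-1, -2, 0, 2, 1].
smooth = np.array([1, 4, 6, 4, 1])
deriv = np.array([-1, -2, 0, 2, 1])
Gx = np.outer(smooth, deriv)   # transverse (horizontal-gradient) kernel
Gy = np.outer(deriv, smooth)   # longitudinal (vertical-gradient) kernel
```

Because each kernel is an outer product, convolving with it is equivalent to smoothing across the edge direction and differentiating along it, which is why the larger kernel retains more detail while suppressing noise.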
sixthly, calculating the average direction angle of the space-time image;
an inclined pattern is generated in each space-time image (STI), showing that the surface features translate along the detection line at an almost constant speed, although water-ripple motion generates some noise in directions different from the main flow direction; the average direction angle φ of the STI is finally obtained by dividing the STI into 12 sub-images and calculating the local direction angle of each;
seventhly, calculating the average flow velocity of the river surface;
the average flow velocity along the velocity-measurement line is calculated by the formula:
v = tan φ × Sx / St   (8)
where Sx is the unit length scale (metres/pixel) along the velocity-measurement line, St is the unit time scale of the time axis, and φ is the average direction angle.
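Eq. (8) reduces to a one-line computation (a minimal sketch; the function name and the sample scales are illustrative):

```python
import math

def surface_velocity(phi_deg, sx, st):
    """Mean surface velocity along the velocity-measurement line, Eq. (8):
    v = tan(phi) * Sx / St, with Sx in m/pixel and St in s/pixel."""
    return math.tan(math.radians(phi_deg)) * sx / st

# A 45-degree STI texture angle with Sx = 0.02 m/px and St = 0.04 s/px
# gives 0.5 m/s.
v = surface_velocity(45.0, 0.02, 0.04)
```

A steeper stripe angle (features covering more pixels per frame) yields a proportionally larger velocity, while φ = 0 (vertical stripes, i.e. stationary texture) yields zero.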
Further, the water level-flow velocity-flow coupling calculation model comprises the following three parts;
the first part is used for measuring the section of the river channel;
the lowest point of the cross-section (starting-point distance B_zmin, river-bed elevation Z_min) is found from the elevation data of the section survey points and taken as the river centre.
The distance between the left bank and the river centre is
L_left = B_zmin − B_left   (9)
and the distance between the right bank and the river centre is
L_right = B_right − B_zmin   (10)
After the vertical-line spacing b of the section is input, the section is divided equally from the river centre towards the left bank and the right bank;
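The partition of the section from the river centre outward can be sketched as follows (a minimal illustration of Eqs. (9)-(10) and the equal division; the function name and the handling of a remainder shorter than the spacing are assumptions):

```python
def partition_section(b_left, b_right, b_zmin, spacing):
    """Place verticals outward from the river centre (the section's lowest
    point, at starting-point distance b_zmin) toward each bank."""
    l_left = b_zmin - b_left                 # Eq. (9)
    l_right = b_right - b_zmin               # Eq. (10)
    left = [b_zmin - i * spacing for i in range(1, int(l_left // spacing) + 1)]
    right = [b_zmin + i * spacing for i in range(1, int(l_right // spacing) + 1)]
    return sorted(left) + [b_zmin] + right   # starting-point distances of verticals

# 20 m wide section, lowest point 8 m from the left bank, 2 m spacing.
verts = partition_section(b_left=0.0, b_right=20.0, b_zmin=8.0, spacing=2.0)
```

The returned list runs from the left bank to the right bank with the river centre included, ready for per-vertical velocity and area calculations.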
the second part obtains the vertical-line mean flow velocity from the river surface flow velocity;
specifically: the vertical flow-velocity distribution of a natural river is represented by a quadratic parabola:
U_ξ = U_0 × [1 − ((h − ξ)/h)²]   (11)
where U_0 is the maximum flow velocity on the vertical line; U_ξ is the time-averaged flow velocity at the position on the vertical line a distance ξ above the river bed; and h is the water depth at the vertical line;
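Depth-averaging a quadratic profile of this kind can be sketched numerically as follows (a hedged illustration: the exact parabola, with its maximum U_0 at the surface, is an assumption consistent with the stated symbols, and for it the closed-form mean is (2/3) U_0):

```python
def vertical_mean_velocity(u0, h, n=10_000):
    """Depth-average the assumed quadratic profile
    U(xi) = U0 * (1 - ((h - xi)/h)**2) by the midpoint rule."""
    dz = h / n
    total = 0.0
    for i in range(n):
        xi = (i + 0.5) * dz                  # midpoint of each depth slice
        total += u0 * (1.0 - ((h - xi) / h) ** 2) * dz
    return total / h

# Surface (maximum) velocity 1.5 m/s in 2 m of water.
v_mean = vertical_mean_velocity(u0=1.5, h=2.0)
```

This is how a measured surface velocity is converted to the vertical-line mean velocity V used in the discharge integral of the third part.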
the third part calculates the section flow from the vertical-line mean velocities, specifically: the flow Q through the water-passing section is the integral of the vertical-line mean velocity V over the unit areas along the cross-section:
Q = ∫_A V dA   (12)
Discretizing this integral:
Q = Σ_{j=0}^{n} u_j × A_j   (13)
where A_j is the area of the j-th calculation unit and u_j is the mean flow velocity of the j-th calculation unit.
The flow through the water-passing cross-section is thus the sum of the flows of the n+1 calculation units, each being the product of the vertical-line mean velocity and the area of that calculation unit.
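The discretized discharge of Eq. (13) is a direct sum (a minimal sketch; the function name and the three-panel example are illustrative):

```python
def section_discharge(areas, velocities):
    """Eq. (13): total discharge Q as the sum over calculation units of
    (vertical-line mean velocity) x (unit area)."""
    return sum(a * u for a, u in zip(areas, velocities))

# Three panels of 2.0 m^2 each with mean velocities 0.4, 0.6 and 0.5 m/s.
q = section_discharge([2.0, 2.0, 2.0], [0.4, 0.6, 0.5])
```

With the panel areas taken from the surveyed cross-section and the velocities from the second part, `q` is the flow value shown on the front-end display.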
The invention has the beneficial effects that: the intelligent non-contact water level-flow velocity-flow rate coupling calculation model based on the video images is developed, the flow velocity, the water level and the flow of a river channel are analyzed in real time by utilizing video image data returned by a camera, a water level flow monitoring system is constructed, and the current flood risk of a drainage basin is analyzed and forecasted in real time.
Drawings
Fig. 1 is a flow chart of an intelligent water level-flow rate-flow analysis model of the invention.
Fig. 2 is a flowchart of the intelligent water level recognition module of the present invention.
Fig. 3 is a flow chart of image background removal of the intelligent water level recognition module of the present invention.
Fig. 4 is a schematic view illustrating the water gauge red and black recognition of the intelligent water level recognition module according to the present invention.
Fig. 5 is a schematic diagram of image matching and obtaining of the intelligent water level identification module of the present invention.
FIG. 6 is a basic diagram of spatiotemporal image velocimetry in the intelligent flow rate identification module of the present invention.
FIG. 7 is a basic flow chart of spatiotemporal image velocimetry in the intelligent flow rate identification module of the present invention.
Fig. 8 is a schematic view of a river flow measurement layout in the intelligent flow rate identification module according to the present invention.
FIG. 9 is a schematic diagram of spatiotemporal image synthesis in the intelligent flow rate identification module of the present invention.
FIG. 10 is a schematic diagram of the mean direction angle of the computed spatiotemporal image in the intelligent flow rate identification module of the present invention.
Fig. 11 is a schematic sectional view of the intelligent flow rate detection module according to the present invention, taken from the center of the river to the left bank/right bank.
Fig. 12 is a schematic view of actually measured vertical flow velocity distribution in the intelligent flow rate detection module according to the present invention.
Detailed Description
As shown in fig. 1, the working and implementation process of the invention is as follows. The non-contact intelligent monitoring system for water level, flow rate and flow based on a video image recognition algorithm comprises a data input end, an intelligent flow detection module and a front-end display; the data input end comprises a video station and a video stream data access module, and the intelligent flow detection module comprises an intelligent water level identification module and an intelligent flow rate identification module. The system is characterized in that: the video station is connected to the video stream data access module; the access module passes water gauge pictures to the intelligent water level identification module and water surface video to the intelligent flow rate identification module; the water level identification module supplies the water level value, and the flow rate identification module supplies the flow rate value, to the water level-flow rate-flow coupling calculation model, and the flow value calculated by the model is displayed on the front-end display.
As shown in FIG. 2, the invention develops a set of intelligent water level recognition modules based on artificial intelligence methods and computer vision algorithms. The intelligent identification module is deployed on the video back-end platform and accesses the real-time video to realize real-time identification of the water level. The identification process of the intelligent water level identification module is:
firstly, preprocessing an image;
the obtained colour water gauge image is grayed with a weighted-average algorithm: the RGB three-channel values are averaged with different weights according to their importance. Because the human eye is most sensitive to green and least sensitive to blue, a reasonable grayscale image is obtained by the weighted average of the RGB channel values according to the following formula:
I(i,j) = 0.299 × R(i,j) + 0.587 × G(i,j) + 0.114 × B(i,j)   (1)
where I is the output grayscale image matrix; R, G and B are the three RGB channels of the colour image; and (i, j) is the pixel coordinate in the image matrix.
Secondly, removing the background of the image;
as shown in fig. 3, the water gauge image is separated from the background using the inter-column single-difference algorithm. For the preprocessed grayscale image matrix I_{m×n} (m rows and n columns of I), each column of I_{m×n} is accumulated and averaged to obtain a new vector P_n, namely:
P(j) = (1/m) × Σ_{i=0}^{m−1} I(i,j), j ∈ (0, n−1)   (2)
A single difference is then taken between adjacent columns of the vector P_n to obtain the single-difference vector P_, where:
P_(0) = 0,
P_(i) = |P(i) − P(i−1)|, i ∈ (1, n−1)   (3)
The 8 largest single-difference values are selected and, for each, its subscript, the inter-column single-difference value itself, and the value obtained by subtracting the maximum subscript difference from the sum of its subscript differences to the other 7 values are stored; the reciprocal of the square of this subtraction value is used as a weight to process the corresponding single-difference value. That is, in the matrix select_{8×3}: select(i,0) is the subscript of the (i+1)-th largest single-difference value; select(i,1) is the (i+1)-th largest single-difference value; select(i,2) is the difference between the sum of the subscript differences from the (i+1)-th largest value to the others and the maximum subscript difference.
The weight vector P_8 is:
P(i) = 1 / select(i,2)²   (4)
The weights of the values clustered among the 8 largest are larger than those of the other values, so the clustered single-difference values are attenuated slowly while the non-clustered ones are attenuated quickly. The clustering of single-difference positions marks the region where the water gauge lies, a consequence of the characteristic distribution of characters on the gauge face; the non-clustered single-difference values can therefore be treated as interference, which is continually weakened by:
select(i,1) = select(i,1) × P(i)   (5)
A threshold is then set, the maximum and minimum subscripts of the clustered single-difference values exceeding the threshold are extracted to give a subscript range, and the range boundaries are the segmentation boundaries of the water gauge; background separation of the water gauge is finally achieved from these boundaries. Panel (a) shows the inter-column single difference of each image column, plotted as a vertical line graph; panel (b) shows the 8 largest inter-column single-difference values from panel (a); panel (c) shows that, after clustering of these 8 values, the attenuation of the clustered single differences is far smaller than the change in the non-clustered ones. Fig. 2 shows that the boundary of the water gauge in the column direction of the image is found by setting the threshold on the basis of panel (c).
Thirdly, distinguishing red and black of a water gauge;
as shown in fig. 4, the red and black faces of the water gauge are distinguished with the inter-channel single-difference algorithm, and binarization of the gauge is realized with the partitioned Otsu algorithm. One channel is taken as the reference and differenced against the other two, i.e. ΔRG = R − G and ΔRB = R − B. Ideally, when the face is black the mean of most ΔRG and ΔRB values is close to 0, and when the face is red it is close to 255. The black-and-white result is a binary grayscale image whose two gray values are 0 and 255. Binarizing the gauge grayscale image with the partitioned Otsu method eliminates the interference of shadows in the image with matching and highlights the character information on the gauge face;
fourthly, matching water gauges;
the matching principle is shown in fig. 5: the water gauge image obtained by the above processing is matched, using the pyramid matching algorithm, against the prepared binarized digital template gauge to obtain an offset coefficient; the three-wire reading is then obtained from the correspondence between position and value on the template gauge, realizing automatic identification of the gauge reading;
and fifthly, reading by a water gauge.
The principle and flow of the intelligent flow rate identification module are shown in figs. 6 and 7. The invention adopts space-time image velocimetry (STIV) to measure the river flow velocity: the brightness distribution of the river-surface image follows the surface flow, and the brightness or gray level on the water surface changes as the surface flows. Using the space-time image (STI), in which the brightness along a detection line set parallel to the water flow direction is recorded over time, the flow velocity can be calculated by analysing the gradient (distance/time) of the striped pattern in the STI. The identification process of the intelligent flow rate identification module is:
firstly, acquiring a video image;
secondly, calibrating a camera;
as shown in fig. 8, a camera erected on the bank beside the river shoots video of the river water surface, and 6 marker points are laid out on the two banks; the relation between physical coordinates and image coordinates is found through a pinhole imaging model:
a relationship is established between the pixel coordinates (x, y) and the physical coordinates (X, Y, Z) using the following collinearity equations:
x = x0 − c × [m11(X − X0) + m12(Y − Y0) + m13(Z − Z0)] / [m31(X − X0) + m32(Y − Y0) + m33(Z − Z0)]   (6)
y = y0 − c × [m21(X − X0) + m22(Y − Y0) + m23(Z − Z0)] / [m31(X − X0) + m32(Y − Y0) + m33(Z − Z0)]   (7)
where c is the focal length, m_ij (1 ≤ i ≤ 3, 1 ≤ j ≤ 3) are the rotation coefficients of the camera, (X0, Y0, Z0) are the physical coordinates of the camera, and (x0, y0) is the centre of the screen image. These coefficients are determined by a camera calibration procedure using the six ground control points, with the emphasis on determining the length of the STI horizontal axis, which depends on the conversion accuracy of this geometric correction;
thirdly, correcting the image geometry;
optical distortion in the image is corrected by the direct linear transformation method with distortion correction, using the control points laid out on site;
fourthly, synthesizing a space-time image;
as shown in FIG. 9, the original space-time image is synthesized from a detection line. A single camera continuously acquires M original grayscale images at time interval ΔT; a detection line is set on each image, with a width of 1 pixel and a length of N pixels. Taking the length M of the image sequence as the vertical coordinate and the length N of the velocity-measurement line as the horizontal coordinate, a space-time image (STI) of size M × N pixels is established: its horizontal coordinate is the pixel length N of the velocity-measurement line and its vertical coordinate is the acquisition time T of the M images;
fifthly, preprocessing a spatio-temporal image;
carrying out histogram equalization on the space-time image to increase the image contrast. Further, multi-level edge detection is performed on the equalized space-time image with a Canny operator to highlight the texture edges of the space-time image: 1) the gradient magnitude and gradient direction of each point are calculated with a Sobel operator; the default Sobel convolution kernel size is 3, and enlarging it to 5 retains more detail; 2) non-maximum suppression (only maxima retained) eliminates the stray responses produced by edge detection; 3) a double threshold determines real and potential edges, with the minimum threshold set to 90 and the maximum threshold set to 170.
The Sobel operator transverse and longitudinal convolution kernels (standard 3 × 3 form) are:

G_x = [ -1 0 1 ; -2 0 2 ; -1 0 1 ]

G_y = [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]
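Step 1) of the preprocessing can be sketched with the 3 × 3 kernels above; the patent enlarges the kernel to size 5 (e.g. via OpenCV's `cv2.Sobel(..., ksize=5)`), but the 3 × 3 version below is enough to show the gradient computation:

```python
import numpy as np

# Standard 3x3 Sobel kernels (the patent uses the 5x5 enlargement).
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)   # transverse
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)   # longitudinal

def sobel_gradient(img):
    """Gradient magnitude and direction of a 2-D grayscale image
    (valid region only, i.e. the border pixels are dropped)."""
    img = np.asarray(img, float)
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                   # correlate with each kernel tap
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += KX[i, j] * patch
            gy += KY[i, j] * patch
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

The magnitude map feeds non-maximum suppression and the 90/170 double thresholding of steps 2) and 3).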
sixthly, calculating the average direction angle of the space-time image;
an inclined pattern is generated in each space-time image STI, showing that the surface features translate along the detection line at a nearly constant speed, although the motion of water ripples also produces some noise in directions other than the main flow direction; the STI is divided into 12 sub-images, the local direction angle of each sub-image is calculated, and finally the average direction angle phi of the STI is obtained;
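The patent does not spell out how each sub-image's local direction angle is estimated; the sketch below assumes the gradient structure-tensor (doubled-angle) estimator commonly used with space-time image velocimetry, and all names are illustrative:

```python
import numpy as np

def texture_angle(sub):
    """Local texture direction angle of one STI sub-image via the
    gradient structure tensor; the streaks run perpendicular to the
    dominant gradient direction."""
    gy, gx = np.gradient(np.asarray(sub, float))
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    grad_dir = 0.5 * np.arctan2(2 * jxy, jxx - jyy)  # doubled-angle average
    return grad_dir + np.pi / 2

def mean_sti_angle(sti, n_tiles=12):
    """Average direction angle phi of an STI from 12 vertical sub-images."""
    tiles = np.array_split(np.asarray(sti, float), n_tiles, axis=1)
    return float(np.mean([texture_angle(t) for t in tiles]))
```

On a synthetic STI whose streaks lie at 45 degrees, the estimator recovers pi/4, which is a convenient sanity check before applying it to real imagery.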
seventhly, calculating the average flow velocity of the river surface;
the average flow velocity along the velocity measurement line direction is calculated by the formula:
V = (S_x / S_t) · tan(phi)

wherein S_x is the unit length scale (meters/pixel) on the velocity-measurement line, S_t is the unit time scale of the time axis, and phi is the average direction angle.
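The velocity formula is then a one-line computation; for example, with an assumed S_x = 0.05 m/pixel and S_t = 0.04 s/pixel (25 frames per second), a direction angle of atan(0.8) gives 1.0 m/s:

```python
import math

def surface_velocity(sx, st, phi):
    """Mean surface velocity along the velocity-measurement line from the
    STI mean direction angle: V = (S_x / S_t) * tan(phi).
    sx: meters per pixel along the line; st: seconds per STI row."""
    return sx / st * math.tan(phi)
```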
The water level-flow velocity-flow coupling calculation model is divided into three parts.
First part for measuring river cross section
As shown in FIG. 11, the lowest point of the cross section (starting-point distance B_zmin, river bed elevation Z_min) is found from the elevation data of the cross-section measuring points, and this point is taken as the river center.
Distance between the left bank and the river center:

L_left = B_zmin − B_left    (9)

Distance between the right bank and the river center:

L_right = B_right − B_zmin    (10)
After the spacing b of the cross-section verticals is input, the cross section is divided at equal intervals from the river center toward the left bank and the right bank;
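A sketch of the division, under the assumption that verticals are simply placed every b meters outward from the river center until each bank is reached (equations (9) and (10) give the two half-widths); names are illustrative:

```python
def section_verticals(b_left, b_right, b_zmin, b):
    """Place computation verticals every b meters outward from the river
    center (the lowest bed point) toward each bank.
    Returns (L_left, L_right, sorted starting-point distances)."""
    if b <= 0:
        raise ValueError("vertical spacing b must be positive")
    l_left = b_zmin - b_left      # (9)  left bank to center
    l_right = b_right - b_zmin    # (10) center to right bank
    xs = [b_zmin]
    step = b
    while b_zmin - step >= b_left or b_zmin + step <= b_right:
        if b_zmin - step >= b_left:
            xs.append(b_zmin - step)
        if b_zmin + step <= b_right:
            xs.append(b_zmin + step)
        step += b
    return l_left, l_right, sorted(xs)
```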
the second part is to obtain the average flow velocity of the vertical line according to the surface flow velocity of the river channel
As the flow-velocity distribution law of FIG. 12 shows, the outer region (upper part) of the profile is well represented by a quadratic parabola, while the inner region, particularly near the bed or channel bottom, follows a logarithmic distribution. Extending the quadratic parabola into the inner region introduces only a small error relative to the logarithmic formula, and since the inner region occupies only a small share of the vertical, it has little influence on the overall vertical velocity distribution. The vertical velocity distribution of a natural river is therefore expressed with a quadratic parabola:
U_ξ = U_0 · [1 − (1 − ξ/h)²]
in the formula, U_0 is the maximum flow velocity on the vertical; U_ξ is the time-averaged flow velocity at the point on the vertical a distance ξ above the river bed; h is the water depth at the vertical;
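Assuming the quadratic parabola takes the common form U(ξ) = U_0 · [1 − (1 − ξ/h)²] consistent with the variable definitions above (the patent's exact coefficients are rendered only as an image), the vertical mean velocity needed in the third part can be obtained by numerical depth-averaging; analytically the mean equals (2/3)·U_0:

```python
def vertical_mean_velocity(u0, h, n=10000):
    """Depth-average the assumed quadratic-parabola profile
    U(xi) = U0 * (1 - (1 - xi/h)**2), xi measured up from the bed,
    by midpoint-rule integration over n slices."""
    total = 0.0
    for k in range(n):
        xi = (k + 0.5) * h / n
        total += u0 * (1.0 - (1.0 - xi / h) ** 2)
    return total / n
```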
the third part calculates the section flow according to the average flow velocity of the vertical line,
when the spacing of the verticals is sufficiently small, the river bottom between the jth and the (j+1)th vertical can be considered approximately a straight line, and the vertical mean velocity also varies linearly within this range, so the distribution of the vertical mean velocity u along the cross section can be obtained.
The discharge Q of the cross section is the integral of the vertical mean velocity V over the area of the cross section:

Q = ∫_A V dA
discretizing the above integral:

Q = Σ_{j=1}^{n+1} u_j · A_j
in the formula, A_j is the area of the jth calculation cell; u_j is the mean flow velocity of the jth cell;
the discharge of the water cross section is the sum of the discharges of the n+1 calculation cells, and each part is the product of the vertical mean velocity and the area of the calculation cell.
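The discretized discharge sum is then a direct accumulation over the n+1 cells; names are illustrative:

```python
def section_discharge(areas, velocities):
    """Cross-section discharge as the sum over the computation cells:
    Q = sum_j u_j * A_j (the discretized area integral)."""
    if len(areas) != len(velocities):
        raise ValueError("need one mean velocity per cell area")
    return sum(a * u for a, u in zip(areas, velocities))
```

For two cells of 2.0 and 3.0 square meters with mean velocities 0.5 and 1.0 m/s, the discharge is 1.0 + 3.0 = 4.0 cubic meters per second.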

Claims (2)

1. A non-contact intelligent water level, flow velocity and flow monitoring system based on a video image recognition algorithm, comprising a data input end, an intelligent flow detection module and a front-end display, wherein the data input end comprises a video station and a video-stream data access module, and the intelligent flow detection module comprises an intelligent water level identification module and an intelligent flow velocity identification module; characterized in that: the video station is connected with the video-stream data access module; the video-stream data access module displays the water gauge picture and feeds it to the intelligent water level identification module, and displays the water surface video and feeds it to the intelligent flow velocity identification module; the intelligent water level identification module supplies the water level value, and the intelligent flow velocity identification module supplies the flow velocity value, to a water level-flow velocity-flow coupling calculation model, and the flow value calculated by the water level-flow velocity-flow coupling calculation model is displayed on the front-end display;
the intelligent flow velocity identification module comprises an identification flow process method which comprises the following steps:
firstly, acquiring a video image;
the camera sequentially collects the water surface video and the water gauge image; the frame rate of the water surface video is set to 25 frames per second, and 8 seconds of water surface video are collected for identifying the surface flow velocity; one frame of the water gauge image is collected for identifying the water level;
secondly, calibrating a camera;
erecting a camera on the bank beside the river to shoot a video of the river water surface, and laying out 6 marking points on the two banks; the relation between the physical coordinates and the image coordinates is then found through a pinhole imaging model:
a relationship is established between the pixel coordinates (x, y) and the physical coordinates (X, Y, Z) using the following collinearity equations:

x = x_0 − c · [a_11(X − X_0) + a_12(Y − Y_0) + a_13(Z − Z_0)] / [a_31(X − X_0) + a_32(Y − Y_0) + a_33(Z − Z_0)]

y = y_0 − c · [a_21(X − X_0) + a_22(Y − Y_0) + a_23(Z − Z_0)] / [a_31(X − X_0) + a_32(Y − Y_0) + a_33(Z − Z_0)]

wherein c is the focal length, a_ij (1 ≤ i ≤ 3, 1 ≤ j ≤ 3) are the rotation coefficients of the camera, (X_0, Y_0, Z_0) are the physical coordinates of the camera, and (x_0, y_0) is the center of the screen image; these coefficients are determined by a camera calibration procedure using the six ground control points, wherein determining the length of the STI horizontal axis depends on the conversion accuracy of the geometric correction;
thirdly, correcting the image geometry;
carrying out optical distortion correction on the image by a distortion-compensated direct linear transformation (DLT) method, using the control points laid out on site;
fourthly, synthesizing a space-time image;
synthesizing the original space-time image from the detection line: a single camera continuously acquires M original gray-level images at a time interval ΔT; on each image a detection line is set, with a width of 1 pixel and a length of N pixels; taking the length M of the image sequence as the vertical coordinate and the length N of the velocity-measurement line as the horizontal coordinate, a space-time image STI of M × N pixels is established, whose horizontal coordinate is the pixel length N of the velocity-measurement line and whose vertical coordinate is the acquisition time T of the M images;
fifthly, preprocessing a spatio-temporal image;
carrying out histogram equalization on the spatio-temporal image to increase the image contrast; specifically, multi-level edge detection is carried out on the spatio-temporal image after histogram equalization operation by using a Canny operator, and texture edges of the spatio-temporal image are highlighted;
1) calculating the gradient magnitude and gradient direction of each point with a Sobel operator; the default Sobel convolution kernel size is 3, and it is enlarged to 5 so that more detail is obtained;
2) using non-maximum suppression (only maxima retained) to eliminate the stray responses produced by edge detection;
3) applying a double threshold to determine real and potential edges; the present invention sets the minimum threshold to 90 and the maximum threshold to 170;
the Sobel operator transverse and longitudinal convolution kernels (standard 3 × 3 form) are:

G_x = [ -1 0 1 ; -2 0 2 ; -1 0 1 ]

G_y = [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]
sixthly, calculating the average direction angle of the space-time image;
an inclined pattern is generated in each space-time image STI, showing that the surface features translate along the detection line at a nearly constant speed, although the motion of water ripples also produces some noise in directions other than the main flow direction; the STI is divided into 12 sub-images, the local direction angle of each sub-image is calculated, and finally the average direction angle phi of the STI is obtained;
seventhly, calculating the average flow velocity of the river surface;
the average flow velocity along the velocity measurement line direction is calculated by the formula:
V = (S_x / S_t) · tan(phi)

wherein S_x is the unit length scale (meters/pixel) on the velocity-measurement line, S_t is the unit time scale of the time axis, and phi is the average direction angle.
2. The non-contact intelligent monitoring system for water level, flow rate and flow based on the video image recognition algorithm according to claim 1, characterized in that: the water level-flow velocity-flow coupling calculation model comprises the following parts;
the first part is used for measuring the section of the river channel;
finding out the lowest point of the cross section (starting-point distance B_zmin, river bed elevation Z_min) from the elevation data of the cross-section measuring points, and taking this point as the river center;
distance between the left bank and the river center:

L_left = B_zmin − B_left    (9)

distance between the right bank and the river center:

L_right = B_right − B_zmin    (10)
After the spacing b of the cross-section verticals is input, the cross section is divided at equal intervals from the river center toward the left bank and the right bank;
the second part is that the average flow velocity of the vertical line is obtained according to the surface flow velocity of the river channel;
the method specifically comprises the following steps: the natural river vertical flow velocity distribution is represented by a quadratic parabola:
U_ξ = U_0 · [1 − (1 − ξ/h)²]
in the formula, U_0 is the maximum flow velocity on the vertical; U_ξ is the time-averaged flow velocity at the point on the vertical a distance ξ above the river bed; h is the water depth at the vertical;
the third part calculates the cross-section discharge from the vertical mean velocities, specifically: the discharge Q of the water cross section is the integral of the vertical mean velocity V over the area of the cross section:

Q = ∫_A V dA
discretizing the above integral:

Q = Σ_{j=1}^{n+1} u_j · A_j
in the formula, A_j is the area of the jth calculation cell; u_j is the mean flow velocity of the jth cell;
the discharge of the water cross section is the sum of the discharges of the n+1 calculation cells, and each part is the product of the vertical mean velocity and the area of the calculation cell.
CN202210048291.6A 2022-01-17 2022-01-17 Non-contact type water level, flow velocity and flow intelligent monitoring system based on video image recognition algorithm Withdrawn CN114373156A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210048291.6A CN114373156A (en) 2022-01-17 2022-01-17 Non-contact type water level, flow velocity and flow intelligent monitoring system based on video image recognition algorithm


Publications (1)

Publication Number Publication Date
CN114373156A true CN114373156A (en) 2022-04-19

Family

ID=81143874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210048291.6A Withdrawn CN114373156A (en) 2022-01-17 2022-01-17 Non-contact type water level, flow velocity and flow intelligent monitoring system based on video image recognition algorithm

Country Status (1)

Country Link
CN (1) CN114373156A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116136428A (en) * 2023-04-20 2023-05-19 中国铁塔股份有限公司湖北省分公司 River water level measuring system, method and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116136428A (en) * 2023-04-20 2023-05-19 中国铁塔股份有限公司湖北省分公司 River water level measuring system, method and readable storage medium
CN116136428B (en) * 2023-04-20 2023-08-08 中国铁塔股份有限公司湖北省分公司 River water level measuring system, method and readable storage medium

Similar Documents

Publication Publication Date Title
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN110175576B (en) Driving vehicle visual detection method combining laser point cloud data
CN105354865B (en) The automatic cloud detection method of optic of multispectral remote sensing satellite image and system
CN109903331B (en) Convolutional neural network target detection method based on RGB-D camera
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN104835175B (en) Object detection method in a kind of nuclear environment of view-based access control model attention mechanism
CN103198467B (en) Image processing apparatus and image processing method
CN109740485B (en) Reservoir or small reservoir identification method based on spectral analysis and deep convolutional neural network
CN113435282B (en) Unmanned aerial vehicle image ear recognition method based on deep learning
CN112560619B (en) Multi-focus image fusion-based multi-distance bird accurate identification method
CN107424142A (en) A kind of weld joint recognition method based on saliency detection
CN110232389A (en) A kind of stereoscopic vision air navigation aid based on green crop feature extraction invariance
CN113240626A (en) Neural network-based method for detecting and classifying concave-convex flaws of glass cover plate
CN109711268B (en) Face image screening method and device
CN108596975A (en) A kind of Stereo Matching Algorithm for weak texture region
CN104952070B (en) A kind of corn field remote sensing image segmentation method of class rectangle guiding
AU2020100044A4 (en) Method of tracking of Surgical Target and Tool
US20220128358A1 (en) Smart Sensor Based System and Method for Automatic Measurement of Water Level and Water Flow Velocity and Prediction
CN116721391B (en) Method for detecting separation effect of raw oil based on computer vision
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
CN113033315A (en) Rare earth mining high-resolution image identification and positioning method
CN111199245A (en) Rape pest identification method
CN111665199A (en) Wire and cable color detection and identification method based on machine vision
CN114373156A (en) Non-contact type water level, flow velocity and flow intelligent monitoring system based on video image recognition algorithm
CN109815784A (en) A kind of intelligent method for classifying based on thermal infrared imager, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220419