CN117152463A - Debris flow faucet monitoring method based on video analysis - Google Patents


Info

Publication number
CN117152463A
Authority
CN
China
Prior art keywords
debris flow
flow
area
frame
optical flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311012005.1A
Other languages
Chinese (zh)
Inventor
张钊
崔醒龙
李睿
李俊峄
田壮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhiyang Innovation Technology Co Ltd
Original Assignee
Zhiyang Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhiyang Innovation Technology Co Ltd filed Critical Zhiyang Innovation Technology Co Ltd
Priority to CN202311012005.1A priority Critical patent/CN117152463A/en
Publication of CN117152463A publication Critical patent/CN117152463A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/62Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a debris flow faucet monitoring method based on video analysis, belonging to the technical field of environmental monitoring and environmental accident emergency treatment, comprising the following steps: acquiring a gully scene video shot by monitoring equipment in real time and decoding it to obtain a video frame set; calculating the image size and mapping preset coordinates; performing inverse perspective matrix calculation on the coordinates to obtain the coordinates after perspective correction; and calculating the actual movement region of the debris flow through image fusion, segmenting it, and judging whether the debris flow tap has arrived according to the change of the optical flow area in the preset ROI region. The method solves the misidentification and misjudgment that occur when the flow rate of open-channel water in the gully is low, caused by the lack of debris flow tap monitoring in traditional video online monitoring methods; it also improves the debris flow velocimetry method and solves the problem of acquiring tap information in an intelligent debris flow inspection scene.

Description

Debris flow faucet monitoring method based on video analysis
Technical Field
The application relates to the technical field of environmental monitoring and environmental accident emergency treatment, and in particular to a method for monitoring the debris flow faucet (i.e., the "tap", the leading surge front of a debris flow) based on video analysis.
Background
Mountains, plateaus, hills and similar terrain account for 69% of China's land area, and a large number of geological disasters such as collapses, landslides and debris flows occur every year. With technological progress, intelligent operation and maintenance based on video analysis has gradually become one of the methods for discovering and giving early warning of debris flow disasters.
The mainstream surveillance-video-based monitoring method at the present stage is to perform frame extraction and target detection on the real-time video with an integrated artificial intelligence model, obtain local features of the debris flow through deep learning, identify the debris flow, extract point location information to form a space-time image, analyze it with the STIV (Space-Time Image Velocimetry) algorithm, and calculate the flow velocity of the debris flow.
However, deep-learning-based model training requires a large amount of debris flow data, and since a debris flow is a mixture of mud, sand and rainwater, its feature points are numerous while the data are sparse, so the false alarm and missed alarm rates of the recognition model are very high and cannot be reduced quickly in a short time. Moreover, because recognition false alarms are frequent, the velocimetry function cannot calculate the exact pixel positions of the debris flow; open-channel water flow also exists in the test point area, and the speed calculation does not consider acquiring the information of the first-wave tap of the debris flow, so the error of on-site calculation is large.
Disclosure of Invention
The technical problem to be solved by the application is to provide a debris flow faucet monitoring method based on video analysis, which solves the problem of acquiring tap information in an intelligent debris flow inspection scene.
In order to solve the technical problems, the application provides the following technical scheme:
a debris flow faucet monitoring method based on video analysis comprises the following steps:
step 1: acquiring a valley scene video shot by monitoring equipment in real time, and decoding to obtain a video frame set;
step 2: calculating the size of the image and mapping preset coordinates;
step 3: performing inverse perspective matrix calculation on the coordinates to obtain perspective coordinates;
step 4: and calculating an actual movement region of the debris flow through image fusion, dividing the actual movement region, and judging whether the debris flow is a tap according to the change of the optical flow area of the ROI which is preset.
The application has the following beneficial effects:
the method solves the problem of misidentification and misjudgment when the flow rate of open channel water flow in the gully is low due to the lack of monitoring the debris flow faucet in the traditional video online monitoring method; meanwhile, under the condition of not depending on a large number of training samples and a deep learning model, optical flow data is filtered through background modeling and a Kalman filtering algorithm, so that a debris flow speed measuring method is improved, and the problem of acquiring faucet information in an intelligent debris flow inspection scene is solved.
Drawings
FIG. 1 is a flow chart of a debris flow faucet monitoring method based on video analysis according to the present application;
FIG. 2 is a schematic diagram of Kalman filtering before and after filtering, wherein (a) is a schematic diagram before filtering and (b) is a schematic diagram after filtering;
FIG. 3 is a schematic view of the monitoring effect of the debris flow faucet of the present application;
FIG. 4 is a schematic view of the debris flow faucet versus water flow area variation in accordance with the present application;
FIG. 5 is a graph showing the average speed results of the debris flow faucet according to the present application.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the advantages clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
The application provides a debris flow faucet monitoring method based on video analysis, which is shown in fig. 1 and comprises the following steps:
step 1: acquiring a (debris flow) gully scene video shot by monitoring equipment in real time, and decoding to obtain a video frame set;
in general, the camera can be installed in the area near the debris flow gully by adopting an inclined method, the lens angle generally requires to be capable of seeing the gully bottom, meanwhile, the camera angle is optimal from 60 degrees to 90 degrees relative to the gully, the camera angle can be perfectly calculated by inverse perspective matrix transformation through a certain angle difference, and otherwise, the error can be increased. Meanwhile, due to the existence of perspective effect, things which are originally parallel in the actual scene are intersected in the image acquired by the camera, so that the accuracy of vision measurement is affected.
In the implementation, the system can receive the test video shot and transmitted by the mountain area test point monitoring equipment (such as a monitoring camera) in real time, decode the video, extract frames and obtain an image frame (namely a video frame) set.
Step 2: calculating the size of the image and mapping preset coordinates;
in this step, the pixel size of the image in the image frame set is calculated by a corresponding computer vision technique and the preset coordinates are mapped.
Step 3: performing inverse perspective matrix calculation on the coordinates to obtain perspective coordinates;
in this step, the inverse perspective matrix calculation is performed on the preset coordinates in the image to obtain coordinates after perspective, which means that the transformation matrix is obtained by four groups of mapping point coordinates (four groups of mapping points can be flexibly selected according to the need, for example, can be four vertices of the image) in the original image, so that the original image can be corrected to obtain an image with a overlook angle consistent in wide and high dimensions, and at the same time, the mapping relationship between the actual distance and the pixel distance can be obtained.
Specifically, the monitoring device is generally installed at an inclination in the target scene area and observes the debris flow gully at a certain angle. Because of the perspective effect, lines that are parallel in the actual scene intersect in the image acquired by the camera, which affects the accuracy of vision measurement; the matrix operation is therefore used to reduce this error.
Step 4: and calculating an actual movement region of the debris flow through image fusion, dividing the actual movement region, and judging whether the debris flow is a tap according to the change of the optical flow area of the ROI which is preset.
In this step, the ROI region (specifically, a rotated rectangle) and its actual width and height may be preset.
In the embodiment of the application, partial images can be acquired by the background frame difference method, sparse optical flow points are calculated on temporally adjacent images, and the actual motion region of the debris flow in the scene is then acquired and segmented using image fusion. Afterwards, the optical flow noise points produced by the background frame difference method are filtered by Kalman filtering, and whether the tap has arrived is judged according to the change of the optical flow area of the debris flow region entering the ROI.
Therefore, in finding the debris flow tap, the method neither introduces an additional deep learning model for inference nor trains on samples of debris flow tap data, so it does not need a large number of debris flow pictures or videos for data support before practical application.
The video frames are processed using the optical flow method. When no debris flow occurs, the background in the video is essentially static; when a debris flow occurs, the debris flow region moves relative to the image background, so its velocity vectors differ from those of the neighbouring background. The motion field of the debris flow region is modelled by the optical flow method to monitor the debris flow process, and the cross section is further segmented to measure parameters such as overcurrent speed, overcurrent width and mud level depth.
In specific implementation, as an optional embodiment, the step 4 may include:
performing optical flow and background frame difference calculations on each frame of image, fusing the mask images of the two as the image of the actually moving debris flow region, superposing the mask of each frame while retaining each frame's optical flow calculation result, and performing target parameter detection after all images have been processed;
acquiring the optical flow and motion information of the debris flow by the background frame difference method, whose calculation formula is as follows:
Diff(x,y,t)=|I(x,y,t)-B(x,y)|
where I(x, y, t) is the pixel value of the current frame at time t and location (x, y), B(x, y) is the background pixel value at (x, y), and Diff(x, y, t) is their absolute difference. If Diff(x, y, t) exceeds a preset threshold, the pixel is marked as foreground.
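The frame-difference formula above can be sketched directly; the threshold value and the toy grayscale image below are illustrative assumptions:

```python
import numpy as np

def frame_diff_mask(frame, background, threshold=25):
    """Background frame difference: Diff(x,y,t) = |I(x,y,t) - B(x,y)|.
    Pixels whose difference exceeds the threshold are marked as foreground (1)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example: a static background with one bright moving patch.
background = np.full((4, 4), 50, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200              # the "moving debris" region
mask = frame_diff_mask(frame, background)
```

The resulting binary mask is what gets fused with the optical flow mask to delimit the actually moving debris flow region.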
Because image noise and object motion interference produce some erroneous displacement vectors in the background frame difference method, a Kalman filtering algorithm is preferably used to predict and update the optical flow data, thereby filtering it and reducing the influence of optical flow noise points. That is, as another alternative embodiment, the step 4 may further include:
filtering debris flow optical flow noise points generated in a background frame difference method by using a Kalman filtering method;
wherein, assuming that the current time is k and the observed optical flow value is z_k, the algorithm updating steps of the Kalman filter may be as follows:
Prediction: predict the state at the current time from the state x_{k-1} at the previous time and the state transition matrix F, i.e. x_k = F·x_{k-1};
Prediction error covariance: calculate the prediction error covariance P_k at the current time from the error covariance P_{k-1} at the previous time and the system noise covariance Q, i.e. P_k = F·P_{k-1}·F^T + Q;
Update gain: calculate the update gain K_k from the prediction error covariance P_k at the current time, the observation noise covariance R and the observation matrix H, i.e. K_k = P_k·H^T·(H·P_k·H^T + R)^{-1};
Update state: calculate the optimal estimate x_k' of the state at the current time from the predicted state x_k, the observation z_k and the update gain K_k, i.e. x_k' = x_k + K_k·(z_k − H·x_k);
Update error covariance: calculate the updated error covariance P_k' at the current time from the prediction error covariance P_k and the update gain K_k, i.e. P_k' = (I − K_k·H)·P_k.
Thus, through the above updating steps, the debris flow optical flow data can be smoothed and the influence of optical flow noise points reduced, improving the monitoring precision of the background frame difference method (see fig. 2).
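The five update steps above can be sketched, in the scalar case, roughly as follows; the values of F, H, Q, R and the sample measurement sequence are illustrative assumptions, not parameters from the patent:

```python
class Kalman1D:
    """Minimal scalar Kalman filter following the update steps in the text,
    used here to smooth a noisy optical-flow measurement sequence."""
    def __init__(self, F=1.0, H=1.0, Q=1e-3, R=0.5):
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.x, self.P = 0.0, 1.0          # initial state and error covariance

    def update(self, z):
        # Prediction: x_k = F x_{k-1};  P_k = F P_{k-1} F + Q
        x_pred = self.F * self.x
        P_pred = self.F * self.P * self.F + self.Q
        # Update gain: K_k = P_k H / (H P_k H + R)
        K = P_pred * self.H / (self.H * P_pred * self.H + self.R)
        # Update state: x_k' = x_k + K_k (z_k - H x_k)
        self.x = x_pred + K * (z - self.H * x_pred)
        # Update error covariance: P_k' = (1 - K_k H) P_k
        self.P = (1.0 - K * self.H) * P_pred
        return self.x

kf = Kalman1D()
noisy = [5.0, 5.4, 4.8, 5.1, 9.0, 5.0, 5.2]   # 9.0 stands for an optical-flow noise spike
smoothed = [kf.update(z) for z in noisy]
```

The spike at 9.0 is strongly attenuated in the smoothed sequence, which is the filtering effect illustrated in fig. 2.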
As yet another alternative embodiment, for accurate faucet determination, the step 4 may include:
and if the area of the light flow of the debris flow area, which enters the preset ROI area, suddenly increases and remains constant along with the change of time, judging as a tap.
Further, the step 4 may further include:
tap detection is achieved by monitoring the change of the optical flow area of the debris flow region inside the ROI region; the optical flow area of the ROI is obtained by accumulating the optical flow speed magnitudes of all pixels in the ROI, and this value measures the motion intensity and speed of objects in the ROI;
the calculation formula of the optical flow area:
where the ROI is the region of interest, (u, v) is the pixel coordinates within the region,and->Representing the horizontal and vertical components of the optical flow in the x and y directions, respectively, < >>Representing the magnitude of the optical flow velocity at pixel (u, v).
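The accumulation above can be sketched as follows; the toy flow field and ROI mask are illustrative:

```python
import numpy as np

def optical_flow_area(Vx, Vy, roi_mask):
    """Optical flow 'area' of an ROI: the sum of optical-flow speed magnitudes
    sqrt(Vx^2 + Vy^2) over all pixels (u, v) inside the ROI, measuring the
    motion intensity inside the region."""
    speed = np.sqrt(Vx.astype(float) ** 2 + Vy.astype(float) ** 2)
    return float(speed[roi_mask > 0].sum())

# Toy flow field: motion of magnitude 5 px/frame in a 2x2 patch, still elsewhere.
Vx = np.zeros((4, 4)); Vy = np.zeros((4, 4))
Vx[1:3, 1:3] = 3.0
Vy[1:3, 1:3] = 4.0
roi = np.ones((4, 4), dtype=np.uint8)   # ROI mask covering the whole toy image
S = optical_flow_area(Vx, Vy, roi)
```

Tracking S frame by frame yields the area series on which the tap judgment of the previous embodiment operates.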
As a further alternative embodiment, the step 4 may be followed by:
step 5: and determining a velocity measurement normal according to the segmented section information, and calculating the tap speed of the debris flow by combining the pixel moving distance and time of the adjacent frames.
In this step, since the movement directions of the optical flow vectors are not uniform, a unified velocity measurement normal must be set to calculate the flow velocity. The normal of the line connecting the cross section is taken as the velocity measurement normal; a preset number of pixels is taken forward and backward along the normal direction as the ROI for velocity calculation, and the optical flow area of this region is calculated. The preset number of pixels can be chosen flexibly as needed, for example 10, 15 or 20.
In implementation, a velocity measurement normal can be set at the cross section. After the motion field of the debris flow region is obtained by calculation, sampling is performed on the velocity measurement normal to obtain the instantaneous displacement between the previous frame and the current frame at the normal; the actual speed is then calculated from the video frame rate, the mapping between image pixel distance and actual distance, and the number of skipped frames, combined with the optical flow area. In this way, the peak speed of the debris flow tap can be conveniently calculated from the pixel movement distance and time of adjacent frames.
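The conversion from pixel displacement to actual speed described above can be sketched as follows; all numeric values are hypothetical:

```python
def tap_speed(pixel_displacement, meters_per_pixel, fps, frame_skip=1):
    """Convert the instantaneous pixel displacement measured on the velocity
    normal between two sampled frames into an actual speed in m/s, using the
    actual-distance / pixel-distance mapping from the inverse perspective step.
    frame_skip is the number of jumped frames between the two samples."""
    dt = frame_skip / fps                       # time elapsed between the two frames
    return pixel_displacement * meters_per_pixel / dt

# Hypothetical numbers: 6 px displacement on the normal, 1 px = 0.01 m in the
# corrected view, a 25 fps video, sampling every frame:
v = tap_speed(6, 0.01, 25, frame_skip=1)        # 0.06 m over 0.04 s
```

Storing this per-frame value gives the instantaneous speed series used in step 6 to compute the average tap speed.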
As a further alternative embodiment, the step 5 may be followed by:
step 6: and storing the calculated instantaneous speed of each frame, drawing an ROI area change curve chart after the whole video analysis is completed, and finally, calculating the average flow speed when the debris flow tap occurs by encoding the result frames to synthesize the result video.
In this step, the calculated instantaneous speed of each frame is stored; after the analysis of the whole video is completed, a curve of the ROI area change is drawn, and finally the result frames are encoded into a result video, from which the average flow speed at the occurrence of the debris flow tap is obtained.
In a specific example, FIG. 3 shows the debris flow tap monitoring effect, in which a tap warning ("warning mudflow tap") is given with a tap speed of 1.8 m/s; FIG. 4 shows the change of the debris flow area versus the water flow area, with the frame number on the abscissa and the area on the ordinate, where "flow" denotes the debris flow, "water" the water flow and "diff" the difference between the two; FIG. 5 shows the average speed results of the tap occurrence process, in which the maximum tap speed (max_speed) is 2.02 m/s, the average speed (mean_speed) is 0.36 m/s, the width is 0.7 m and the depth is 1.26 cm.
In summary, the debris flow faucet monitoring method based on video analysis first acquires the gully scene video shot by the monitoring equipment in real time and decodes it into a video frame set; it then calculates the image size and maps the preset coordinates, performs inverse perspective matrix calculation on the coordinates to obtain the corrected coordinates, and finally calculates and segments the actual movement region of the debris flow through image fusion, judging whether the tap has arrived according to the change of the optical flow area in the preset ROI region. This solves the misidentification and misjudgment that occur when the flow rate of open-channel water in the gully is low, caused by the lack of tap monitoring in traditional video online monitoring methods; meanwhile, without relying on a large number of training samples or a deep learning model, the optical flow data is filtered through background modeling and Kalman filtering, the debris flow velocimetry method is improved, and the problem of acquiring tap information in an intelligent debris flow inspection scene is solved.
While the foregoing is directed to the preferred embodiments of the present application, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.

Claims (9)

1. The debris flow faucet monitoring method based on video analysis is characterized by comprising the following steps of:
step 1: acquiring a valley scene video shot by monitoring equipment in real time, and decoding to obtain a video frame set;
step 2: calculating the size of the image and mapping preset coordinates;
step 3: performing inverse perspective matrix calculation on the coordinates to obtain perspective coordinates;
step 4: and calculating an actual movement region of the debris flow through image fusion, dividing the actual movement region, and judging whether the debris flow is a tap according to the change of the optical flow area of the ROI which is preset.
2. The method according to claim 1, wherein in the step 1, the monitoring device is installed in a region near a debris flow gully using an inclination method, and/or the monitoring device has a depression angle of 60-90 degrees.
3. The method according to claim 1, wherein the step 4 comprises:
performing optical flow and background frame difference method calculation on each frame of image, fusing and operating mask images of the two frames as images of an actual moving debris flow area, superposing the mask of each frame, reserving the optical flow calculation result of each frame, and performing target parameter detection after all image calculation is completed;
acquiring an optical flow and motion information of the debris flow by a background frame difference method;
the calculation formula of the background frame difference method is as follows:
Diff(x,y,t)=|I(x,y,t)-B(x,y)|
where I(x, y, t) is the pixel value of the current frame at time t and location (x, y), B(x, y) is the background pixel value at (x, y), and Diff(x, y, t) is their absolute difference.
4. A method according to claim 3, wherein said step 4 comprises:
filtering debris flow optical flow noise points generated in a background frame difference method by using a Kalman filtering method;
wherein, assuming that the current time is k and the observed optical flow value is z_k, the algorithm updating steps of the Kalman filter are as follows:
Prediction: predict the state at the current time from the state x_{k-1} at the previous time and the state transition matrix F, i.e. x_k = F·x_{k-1};
Prediction error covariance: calculate the prediction error covariance P_k at the current time from the error covariance P_{k-1} at the previous time and the system noise covariance Q, i.e. P_k = F·P_{k-1}·F^T + Q;
Update gain: calculate the update gain K_k from the prediction error covariance P_k at the current time, the observation noise covariance R and the observation matrix H, i.e. K_k = P_k·H^T·(H·P_k·H^T + R)^{-1};
Update state: calculate the optimal estimate x_k' of the state at the current time from the predicted state x_k, the observation z_k and the update gain K_k, i.e. x_k' = x_k + K_k·(z_k − H·x_k);
Update error covariance: calculate the updated error covariance P_k' at the current time from the prediction error covariance P_k and the update gain K_k, i.e. P_k' = (I − K_k·H)·P_k.
5. The method according to claim 1, wherein the step 4 comprises:
and if the area of the light flow of the debris flow area, which enters the preset ROI area, suddenly increases and remains constant along with the change of time, judging as a tap.
6. The method according to claim 1, wherein the step 4 comprises:
tap detection is achieved by monitoring the change of the optical flow area of the debris flow region inside the ROI region; the optical flow area of the ROI is obtained by accumulating the optical flow speed magnitudes of all pixels in the ROI, and this value measures the motion intensity and speed of objects in the ROI;
the optical flow area is calculated as:
S_ROI = Σ_{(u,v)∈ROI} sqrt(V_x(u,v)^2 + V_y(u,v)^2)
where ROI is the region of interest, (u, v) are the pixel coordinates within the region, V_x(u, v) and V_y(u, v) are the horizontal and vertical components of the optical flow in the x and y directions respectively, and sqrt(V_x(u,v)^2 + V_y(u,v)^2) is the magnitude of the optical flow velocity at pixel (u, v).
7. The method according to any one of claims 1-6, wherein step 4 further comprises, after:
step 5: and determining a velocity measurement normal according to the segmented section information, and calculating the tap speed of the debris flow by combining the pixel moving distance and time of the adjacent frames.
8. The method according to claim 7, wherein the step 5 comprises:
setting a velocity measurement normal at the cross section; after the motion field of the debris flow region is obtained by calculation, sampling on the velocity measurement normal to obtain the instantaneous displacement between the previous frame and the current frame at the normal, and calculating the actual speed from the video frame rate, the mapping relationship between image pixel distance and actual distance, and the number of skipped frames, in combination with the optical flow area.
9. The method according to claim 7, wherein the step 5 further comprises, after:
step 6: and storing the calculated instantaneous speed of each frame, drawing an ROI area change curve chart after the whole video analysis is completed, and finally, calculating the average flow speed when the debris flow tap occurs by encoding the result frames to synthesize the result video.
CN202311012005.1A 2023-08-11 2023-08-11 Debris flow faucet monitoring method based on video analysis Pending CN117152463A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311012005.1A CN117152463A (en) 2023-08-11 2023-08-11 Debris flow faucet monitoring method based on video analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311012005.1A CN117152463A (en) 2023-08-11 2023-08-11 Debris flow faucet monitoring method based on video analysis

Publications (1)

Publication Number Publication Date
CN117152463A 2023-12-01

Family

ID=88885857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311012005.1A Pending CN117152463A (en) 2023-08-11 2023-08-11 Debris flow faucet monitoring method based on video analysis

Country Status (1)

Country Link
CN (1) CN117152463A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117854256A * 2024-03-05 2024-04-09 成都理工大学 Geological disaster monitoring method based on unmanned aerial vehicle video stream analysis
CN117854256B * 2024-03-05 2024-06-11 成都理工大学 Geological disaster monitoring method based on unmanned aerial vehicle video stream analysis


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination