CN110659627A - Intelligent video monitoring method based on video segmentation - Google Patents
Intelligent video monitoring method based on video segmentation
- Publication number
- CN110659627A (application CN201910949694.6A)
- Authority
- CN
- China
- Prior art keywords
- video
- method based
- monitoring method
- picture
- segmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
- G08B25/01—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
- G08B25/08—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using communication transmission lines
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B3/00—Audible signalling systems; Audible personal calling systems
- G08B3/10—Audible signalling systems; Audible personal calling systems using electric transmission; using electromagnetic transmission
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Biophysics (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Electromagnetism (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The invention relates to an intelligent video monitoring method based on video segmentation. In this method, a NetWarp structure is built on top of a convolutional neural network (CNN) at the computer end; consecutive frames of the surveillance video are analyzed and the objects in the video are segmented, so as to achieve intrusion detection and real-time monitoring of abnormal conditions; when an abnormal condition occurs, a warning sound is emitted and a notification is sent by e-mail or short message. Because a machine completes this work, security personnel are relieved from the tedious and monotonous task of watching screens for long periods and manpower input is reduced; moreover, key data can be quickly located in massive video data, and the probability of missed, false and erroneous reports is greatly reduced.
Description
Technical Field
The invention relates to the technical field of deep learning, in particular to an intelligent video monitoring method based on video segmentation.
Background
In daily life, illegal incidents emerge in an endless stream, and illegal means tend to become more complex with the development of science and technology. In today's society, the prevention and investigation of illegal acts rely largely on the analysis of video surveillance. However, this requires a great deal of labor and time: prevention before an incident requires someone to watch the video surveillance constantly, and investigation after an incident requires analyzing large amounts of surveillance footage. Clearly, the traditional approach of manually watching and reviewing video surveillance cannot meet the requirements of current social development. Providing a new, more intelligent video monitoring method is therefore an urgent problem.
In order to relieve security personnel from the tedious and monotonous task of watching screens for long periods, to have a machine complete this work and reduce manpower input, and at the same time to quickly locate key data in massive video data and greatly reduce the probability of missed, false and erroneous reports, the invention provides an intelligent video monitoring method based on video segmentation.
Intelligent video monitoring uses computer vision technology to process, analyze and understand video signals: through automatic analysis of image sequences, and without human intervention, it locates, identifies and tracks changes in the monitored scene and, on that basis, analyzes and judges the behavior of targets.
A video is composed of successive frame pictures. Intelligent video monitoring needs image segmentation technology for video monitoring analysis. Image segmentation refers to the process of subdividing a digital image into multiple image sub-regions (sets of pixels, also referred to as superpixels). The purpose of image segmentation is to simplify or change the representation of the image so that it is easier to understand and analyze. With the development of deep learning, image segmentation techniques have achieved good results. However, image segmentation cannot simply be applied to video analysis, because the pictures in a video are temporally correlated. For example, take two frames of a video in which the same person appears indoors in one frame and outdoors in the other: from single-image segmentation alone it is hard to judge whether the person is entering or leaving the room, whereas using the temporal relation between the video frames this judgment is easy.
Disclosure of Invention
In order to make up for the defects of the prior art, the invention provides a simple and efficient intelligent video monitoring method based on video segmentation.
The invention is realized by the following technical scheme:
an intelligent video monitoring method based on video segmentation is characterized by comprising the following steps:
firstly, shooting a monitoring video by using a video monitoring camera;
secondly, transmitting the monitoring video data to a computer end;
thirdly, constructing a NetWarp structure on the basis of a convolutional neural network (CNN) at the computer end, analyzing consecutive frames of the surveillance video, and segmenting the objects in the video so as to achieve intrusion detection and real-time monitoring of abnormal conditions;
fourthly, emitting a warning sound for the abnormal condition and sending a notification by e-mail or short message.
In the third step, the video is segmented, and the method comprises the following steps:
(1) Flow Computation
Inputting two consecutive frame pictures I_t and I_(t-1), and calculating the relative offset of each pixel position between the two frames using the DIS-Flow algorithm, to obtain the optical flow data from frame I_t to frame I_(t-1);
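The flow-computation step above can be illustrated with a toy stand-in (Python/NumPy). The function name and the exhaustive integer-offset search below are illustrative assumptions, not the actual DIS-Flow algorithm, which produces a dense per-pixel field via patch-based inverse search over a coarse-to-fine pyramid; this sketch only recovers a single global offset (μ, ν) between two frames.

```python
import numpy as np

def estimate_global_flow(i_t, i_tm1, max_shift=3):
    """Toy stand-in for DIS-Flow: exhaustively search integer offsets (mu, nu)
    so that I_{t-1}(x + mu, y + nu) best matches I_t(x, y).
    Returns the single best global offset; real DIS-Flow is dense and sub-pixel."""
    best, best_err = (0, 0), float("inf")
    for mu in range(-max_shift, max_shift + 1):
        for nu in range(-max_shift, max_shift + 1):
            # roll so that shifted[y, x] == i_tm1[y + nu, x + mu] (wrap-around at borders)
            shifted = np.roll(i_tm1, shift=(-nu, -mu), axis=(0, 1))
            err = float(np.mean((shifted - i_t) ** 2))
            if err < best_err:
                best_err, best = err, (mu, nu)
    return best
```

For a frame that is an exact shifted copy of the previous one, the recovered offset equals the shift applied.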
(2) Flow Transformation
Transforming the optical flow data from frame I_t to frame I_(t-1) with the convolutional neural network FlowCNN, to obtain the transformed optical flow data;
(3) Warping Representations
Using the transformed optical flow data, computing the mapping from each pixel position of the current frame picture I_t to the corresponding position in the previous frame picture I_(t-1), and warping the previous frame's features accordingly, to obtain the warped convolutional features of the previous frame I_(t-1);
(4) Combining Representations
Linearly combining the computed warped convolutional features of the previous frame I_(t-1) with the convolutional features of the current frame I_t, and passing the combined result to the remaining layers of the image convolution network;
(5) Intrusion Detection
Performing difference processing between the segmented video frame images, and judging, according to a preset threshold, whether an abnormal intrusion has occurred; if an abnormal intrusion is found, a warning is given.
In the step (1), the horizontal and vertical offsets of a pixel are represented by a pair of floating-point numbers μ and ν respectively, with (x', y') = (x + μ, y + ν), where (x, y) denotes a pixel position in picture I_t and (x', y') the corresponding position in picture I_(t-1).
Since the optical flow data obtained in step (1) does not by itself represent feature propagation between video frames well, it needs to be transformed.
In the step (2), the convolutional neural network FlowCNN concatenates the original two-channel flow, the previous frame picture I_(t-1), the current frame picture I_t and the difference of the two frames into an 11-channel tensor as its input; FlowCNN itself consists of 4 convolutional layers with ReLU nonlinearities, all layers using 3 × 3 convolution kernels (filters), the output channels of the first three layers being 16, 32 and 2 respectively; the output of the third layer is concatenated with the optical flow data calculated in step (1) and fed to the last convolutional layer to obtain the final transformed flow data.
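The 11-channel input assembly described above can be sketched as follows (Python/NumPy). The channel ordering within the concatenation is an assumption: the text fixes only the constituents — the 2-channel flow, the two 3-channel frames, and their 3-channel difference.

```python
import numpy as np

def flowcnn_input(flow, i_tm1, i_t):
    """Stack FlowCNN's input tensor: 2-channel flow + previous frame (3 ch)
    + current frame (3 ch) + frame difference (3 ch) = 11 channels, shape (11, H, W).
    Channel order is an illustrative assumption."""
    assert flow.shape[0] == 2 and i_tm1.shape[0] == 3 and i_t.shape[0] == 3
    diff = i_t - i_tm1  # per-pixel difference of the two frames
    return np.concatenate([flow, i_tm1, i_t, diff], axis=0)
```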
All parameters in the convolutional neural network FlowCNN are learned by a standard back propagation algorithm (BackPropagation).
In the step (3), the NetWarp module is applied at the k-th layer of the image convolutional network. The feature representations of the two adjacent frames at this layer are z_t^(k) and z_(t-1)^(k); for convenience of presentation they are written z_t and z_(t-1). z_(t-1) is aligned with z_t by warping:
ẑ_(t-1) = Warp(z_(t-1), Λ(F_t))
where ẑ_(t-1) is the warped representation, F_t denotes the optical flow information, and Λ(·) denotes the FlowCNN network.
The Warp(·) operation maps, via the transformed optical flow data, each pixel position (x, y) of the current frame picture I_t to a position (x', y') in the previous frame picture I_(t-1), and is implemented as bilinear interpolation of z_(t-1) at the point (x', y'). With (x_1, y_1), (x_1, y_2), (x_2, y_1) and (x_2, y_2) denoting the corners of the grid cell containing (x', y'):
ẑ_(t-1)(x, y) = [z_(t-1)(x_1, y_1)(x_2 − x')(y_2 − y') + z_(t-1)(x_2, y_1)(x' − x_1)(y_2 − y') + z_(t-1)(x_1, y_2)(x_2 − x')(y' − y_1) + z_(t-1)(x_2, y_2)(x' − x_1)(y' − y_1)] / [(x_2 − x_1)(y_2 − y_1)]
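The warping step can be sketched as a bilinear sampler over a dense flow field (Python/NumPy). Border handling by clamping the sampling coordinates is an assumption the text does not specify.

```python
import numpy as np

def warp_bilinear(z_prev, flow):
    """Warp previous-frame features z_prev (C, H, W) toward the current frame.
    flow (2, H, W) holds per-pixel offsets (mu, nu); the sampling point is
    (x', y') = (x + mu, y + nu), clamped to the image (an assumed border policy)."""
    c, h, w = z_prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xp = np.clip(xs + flow[0], 0, w - 1)
    yp = np.clip(ys + flow[1], 0, h - 1)
    # corners of the unit grid cell containing (x', y')
    x1 = np.floor(xp).astype(int)
    y1 = np.floor(yp).astype(int)
    x2 = np.minimum(x1 + 1, w - 1)
    y2 = np.minimum(y1 + 1, h - 1)
    wx = xp - x1
    wy = yp - y1
    # bilinear interpolation of z_prev at (x', y')
    return (z_prev[:, y1, x1] * (1 - wx) * (1 - wy)
            + z_prev[:, y1, x2] * wx * (1 - wy)
            + z_prev[:, y2, x1] * (1 - wx) * wy
            + z_prev[:, y2, x2] * wx * wy)
```

An integer flow of +1 in x simply samples the neighbor to the right; a fractional flow of +0.5 averages the two neighbors.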
in the step (4), the calculated previous frame picture I(t-1)The distortion convolution kernel and the current frame picture ItThe convolution kernel linear addition of (2):
wherein, w1And w2Is a weight vector, length and zkChannel number is the same, an represents scalar multiplication; w is a1And w2The parameters are learned by a standard Back Propagation algorithm (Back Propagation);
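The per-channel combination above can be sketched directly (Python/NumPy): each of the two weight vectors contributes one learned scalar per channel, broadcast over the spatial dimensions.

```python
import numpy as np

def combine_representations(z_t, z_prev_warped, w1, w2):
    """z~_t = w1 (.) z_t + w2 (.) z^_{t-1}: per-channel scalar weights,
    broadcast over the (H, W) spatial dimensions of (C, H, W) feature maps."""
    return w1[:, None, None] * z_t + w2[:, None, None] * z_prev_warped
```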
In the step (5), difference processing between the segmented video frame images is performed as follows:
d(i, j, t) = F(i, j, t) − F(i, j, t−1)
where F(i, j, t) denotes the pixel value at position (i, j) of frame t, and F(i, j, t−1) the pixel value at position (i, j) of frame t−1.
In the step (5), the threshold is set to 30; if the difference d(i, j, t) reaches the threshold, it is judged that an abnormal intrusion has occurred.
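The differencing-and-threshold test above can be sketched as follows (Python/NumPy). Taking the absolute difference and flagging an intrusion when any pixel reaches the threshold are assumptions about details the text leaves open (the sign of d and the aggregation over pixels are not specified).

```python
import numpy as np

def detect_intrusion(f_t, f_tm1, threshold=30):
    """d(i,j,t) = F(i,j,t) - F(i,j,t-1); report an intrusion if |d| reaches the
    preset threshold anywhere. Returns (flag, per-pixel mask).
    Using |d| and any-pixel aggregation are illustrative assumptions."""
    d = f_t.astype(np.int32) - f_tm1.astype(np.int32)  # avoid uint8 wrap-around
    mask = np.abs(d) >= threshold
    return bool(mask.any()), mask
```

Two identical segmented frames yield no alarm; a single pixel changing by 40 grey levels trips the default threshold of 30.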
The invention has the following beneficial effects: in the intelligent video monitoring method based on video segmentation, a machine completes the monitoring work, so that security personnel are relieved from the tedious and monotonous task of watching screens for long periods and manpower input is reduced; moreover, key data can be quickly located in massive video data, and the probability of missed, false and erroneous reports is greatly reduced.
Drawings
FIG. 1 is a schematic diagram of an intelligent video monitoring method based on video segmentation.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the embodiment of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The intelligent video monitoring method based on video segmentation comprises the following steps:
firstly, shooting a monitoring video by using a video monitoring camera;
secondly, transmitting the monitoring video data to a computer end;
thirdly, constructing a NetWarp structure on the basis of a convolutional neural network (CNN) at the computer end, analyzing consecutive frames of the surveillance video, and segmenting the objects in the video so as to achieve intrusion detection and real-time monitoring of abnormal conditions;
fourthly, emitting a warning sound for the abnormal condition and sending a notification by e-mail or short message.
In the third step, the video is segmented, and the method comprises the following steps:
(1) Flow Computation
Inputting two consecutive frame pictures I_t and I_(t-1), and calculating the relative offset of each pixel position between the two frames using the DIS-Flow algorithm, to obtain the optical flow data from frame I_t to frame I_(t-1);
(2) Flow Transformation
Transforming the optical flow data from frame I_t to frame I_(t-1) with the convolutional neural network FlowCNN, to obtain the transformed optical flow data;
(3) Warping Representations
Using the transformed optical flow data, computing the mapping from each pixel position of the current frame picture I_t to the corresponding position in the previous frame picture I_(t-1), and warping the previous frame's features accordingly, to obtain the warped convolutional features of the previous frame I_(t-1);
(4) Combining Representations
Linearly combining the computed warped convolutional features of the previous frame I_(t-1) with the convolutional features of the current frame I_t, and passing the combined result to the remaining layers of the image convolution network;
(5) Intrusion Detection
Performing difference processing between the segmented video frame images, and judging, according to a preset threshold, whether an abnormal intrusion has occurred; if an abnormal intrusion is found, a warning is given.
In the step (1), the horizontal and vertical offsets of a pixel are represented by a pair of floating-point numbers μ and ν respectively, with (x', y') = (x + μ, y + ν), where (x, y) denotes a pixel position in picture I_t and (x', y') the corresponding position in picture I_(t-1).
Since the optical flow data obtained in step (1) does not by itself represent feature propagation between video frames well, it needs to be transformed.
In the step (2), the convolutional neural network FlowCNN concatenates the original two-channel flow, the previous frame picture I_(t-1), the current frame picture I_t and the difference of the two frames into an 11-channel tensor as its input; FlowCNN itself consists of 4 convolutional layers with ReLU nonlinearities, all layers using 3 × 3 convolution kernels (filters), the output channels of the first three layers being 16, 32 and 2 respectively; the output of the third layer is concatenated with the optical flow data calculated in step (1) and fed to the last convolutional layer to obtain the final transformed flow data.
All parameters in the convolutional neural network FlowCNN are learned by a standard back propagation algorithm (BackPropagation).
In the step (3), the NetWarp module is applied at the k-th layer of the image convolutional network. The feature representations of the two adjacent frames at this layer are z_t^(k) and z_(t-1)^(k); for convenience of presentation they are written z_t and z_(t-1). z_(t-1) is aligned with z_t by warping:
ẑ_(t-1) = Warp(z_(t-1), Λ(F_t))
where ẑ_(t-1) is the warped representation, F_t denotes the optical flow information, and Λ(·) denotes the FlowCNN network.
The Warp(·) operation maps, via the transformed optical flow data, each pixel position (x, y) of the current frame picture I_t to a position (x', y') in the previous frame picture I_(t-1), and is implemented as bilinear interpolation of z_(t-1) at the point (x', y'). With (x_1, y_1), (x_1, y_2), (x_2, y_1) and (x_2, y_2) denoting the corners of the grid cell containing (x', y'):
ẑ_(t-1)(x, y) = [z_(t-1)(x_1, y_1)(x_2 − x')(y_2 − y') + z_(t-1)(x_2, y_1)(x' − x_1)(y_2 − y') + z_(t-1)(x_1, y_2)(x_2 − x')(y' − y_1) + z_(t-1)(x_2, y_2)(x' − x_1)(y' − y_1)] / [(x_2 − x_1)(y_2 − y_1)]
in the step (4), the calculated previous frame picture I(t-1)The distortion convolution kernel and the current frame picture ItThe convolution kernel linear addition of (2):
wherein, w1And w2Is a weight vector, length and zkChannel number is the same,/A table scalar multiplication; w is a1And w2The parameters are learned by a standard Back Propagation algorithm (Back Propagation);
Because the quality of surveillance video is strongly affected by video resolution, background, time of day, weather and other factors, raw video frames may differ greatly and cannot be compared directly by differencing. After the video frames are segmented, the influence of background changes is largely eliminated, and attention is focused on changes of people, vehicles and the like.
In the step (5), difference processing between the segmented video frame images is performed as follows:
d(i, j, t) = F(i, j, t) − F(i, j, t−1)
where F(i, j, t) denotes the pixel value at position (i, j) of frame t, and F(i, j, t−1) the pixel value at position (i, j) of frame t−1.
In the step (5), the threshold is set to 30; if the difference d(i, j, t) reaches the threshold, it is judged that an abnormal intrusion has occurred.
The above describes an intelligent video monitoring method based on video segmentation in the embodiment of the present invention in detail. While the present invention has been described with reference to specific examples, which are provided to assist in understanding the core concepts of the present invention, it is intended that all other embodiments that can be obtained by those skilled in the art without departing from the spirit of the present invention shall fall within the scope of the present invention.
Claims (10)
1. An intelligent video monitoring method based on video segmentation is characterized by comprising the following steps:
firstly, shooting a monitoring video by using a video monitoring camera;
secondly, transmitting the monitoring video data to a computer end;
thirdly, constructing a NetWarp structure on the basis of a convolutional neural network (CNN) at the computer end, analyzing consecutive frames of the surveillance video, and segmenting the objects in the video so as to achieve intrusion detection and real-time monitoring of abnormal conditions;
fourthly, emitting a warning sound for the abnormal condition and sending a notification by e-mail or short message.
2. The intelligent video monitoring method based on video segmentation as claimed in claim 1, wherein: in the third step, the video is segmented, and the method comprises the following steps:
(1) Flow Computation
Inputting two consecutive frame pictures I_t and I_(t-1), and calculating the relative offset of each pixel position between the two frames using the DIS-Flow algorithm, to obtain the optical flow data from frame I_t to frame I_(t-1);
(2) Flow Transformation
Transforming the optical flow data from frame I_t to frame I_(t-1) with the convolutional neural network FlowCNN, to obtain the transformed optical flow data;
(3) Warping Representations
Using the transformed optical flow data, computing the mapping from each pixel position of the current frame picture I_t to the corresponding position in the previous frame picture I_(t-1), and warping the previous frame's features accordingly, to obtain the warped convolutional features of the previous frame I_(t-1);
(4) Combining Representations
Linearly combining the computed warped convolutional features of the previous frame I_(t-1) with the convolutional features of the current frame I_t, and passing the combined result to the remaining layers of the image convolution network;
(5) Intrusion Detection
Performing difference processing between the segmented video frame images, and judging, according to a preset threshold, whether an abnormal intrusion has occurred; if an abnormal intrusion is found, a warning is given.
3. The intelligent video monitoring method based on video segmentation as claimed in claim 2, wherein: in the step (1), the horizontal and vertical offsets of a pixel are represented by a pair of floating-point numbers μ and ν respectively, with (x', y') = (x + μ, y + ν), where (x, y) denotes a pixel position in picture I_t and (x', y') the corresponding position in picture I_(t-1).
4. The intelligent video monitoring method based on video segmentation as claimed in claim 3, wherein: in the step (2), the convolutional neural network FlowCNN concatenates the original two-channel flow, the previous frame picture I_(t-1), the current frame picture I_t and the difference of the two frames into an 11-channel tensor as its input; FlowCNN itself consists of 4 convolutional layers with ReLU nonlinearities, all layers using 3 × 3 convolution kernels (filters), the output channels of the first three layers being 16, 32 and 2 respectively; the output of the third layer is concatenated with the optical flow data calculated in step (1) and fed to the last convolutional layer to obtain the final transformed flow data.
5. The intelligent video monitoring method based on video segmentation as claimed in claim 4, wherein: all parameters in the convolutional neural network FlowCNN are learned by a standard back propagation algorithm.
6. The intelligent video monitoring method based on video segmentation as claimed in claim 5, wherein: in the step (3), the NetWarp module is applied at the k-th layer of the image convolutional network; the feature representations of the two adjacent frames are z_t^(k) and z_(t-1)^(k), written z_t and z_(t-1) for convenience; z_(t-1) is aligned with z_t by warping: ẑ_(t-1) = Warp(z_(t-1), Λ(F_t)), where ẑ_(t-1) is the warped representation, F_t denotes the optical flow information, and Λ(·) denotes the FlowCNN network.
7. The intelligent video monitoring method based on video segmentation as claimed in claim 6, wherein: the Warp(·) operation maps, via the transformed optical flow data, each pixel position (x, y) of the current frame picture I_t to a position (x', y') in the previous frame picture I_(t-1), and is implemented as bilinear interpolation of z_(t-1) at the point (x', y'); with (x_1, y_1), (x_1, y_2), (x_2, y_1) and (x_2, y_2) denoting the corners of the grid cell containing (x', y'):
ẑ_(t-1)(x, y) = [z_(t-1)(x_1, y_1)(x_2 − x')(y_2 − y') + z_(t-1)(x_2, y_1)(x' − x_1)(y_2 − y') + z_(t-1)(x_1, y_2)(x_2 − x')(y' − y_1) + z_(t-1)(x_2, y_2)(x' − x_1)(y' − y_1)] / [(x_2 − x_1)(y_2 − y_1)]
8. The intelligent video monitoring method based on video segmentation as claimed in claim 7, wherein: in the step (4), the computed warped features of the previous frame picture I_(t-1) are linearly combined with the convolutional features of the current frame picture I_t:
z̃_t = w_1 ⊙ z_t + w_2 ⊙ ẑ_(t-1)
where w_1 and w_2 are weight vectors whose length equals the number of channels of z^(k), and ⊙ denotes per-channel scalar multiplication; the parameters w_1 and w_2 are learned by the standard back-propagation algorithm.
9. The intelligent video monitoring method based on video segmentation as claimed in claim 8, wherein: in the step (5), the difference processing is performed between the segmented video frame images, and the method comprises the following steps:
d(i,j,t)=F(i,j,t)-F(i,j,t-1)
where F(i, j, t) denotes the pixel value at position (i, j) of frame t, and F(i, j, t−1) the pixel value at position (i, j) of frame t−1.
10. The intelligent video monitoring method based on video segmentation as claimed in claim 9, wherein: in the step (5), the threshold is set to 30; if the difference d(i, j, t) reaches the threshold, it is judged that an abnormal intrusion has occurred.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910949694.6A CN110659627A (en) | 2019-10-08 | 2019-10-08 | Intelligent video monitoring method based on video segmentation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910949694.6A CN110659627A (en) | 2019-10-08 | 2019-10-08 | Intelligent video monitoring method based on video segmentation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110659627A true CN110659627A (en) | 2020-01-07 |
Family
ID=69040050
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910949694.6A Pending CN110659627A (en) | 2019-10-08 | 2019-10-08 | Intelligent video monitoring method based on video segmentation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110659627A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113379046A (en) * | 2020-03-09 | 2021-09-10 | 中国科学院深圳先进技术研究院 | Method for accelerated computation of convolutional neural network, storage medium, and computer device |
CN113487618A (en) * | 2021-09-07 | 2021-10-08 | 北京世纪好未来教育科技有限公司 | Portrait segmentation method, portrait segmentation device, electronic equipment and storage medium |
WO2022012002A1 (en) * | 2020-07-15 | 2022-01-20 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for video analysis |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101266710A (en) * | 2007-03-14 | 2008-09-17 | 中国科学院自动化研究所 | An all-weather intelligent video analysis monitoring method based on a rule |
US20140348390A1 (en) * | 2013-05-21 | 2014-11-27 | Peking University Founder Group Co., Ltd. | Method and apparatus for detecting traffic monitoring video |
CN107801000A (en) * | 2017-10-17 | 2018-03-13 | 国网江苏省电力公司盐城供电公司 | A kind of transmission line of electricity external force damage prevention intelligent video monitoring system |
CN108009506A (en) * | 2017-12-07 | 2018-05-08 | 平安科技(深圳)有限公司 | Intrusion detection method, application server and computer-readable recording medium |
CN108846335A (en) * | 2018-05-31 | 2018-11-20 | 武汉市蓝领英才科技有限公司 | Wisdom building site district management and intrusion detection method, system based on video image |
- 2019-10-08: CN CN201910949694.6A patent/CN110659627A/en, status Pending
Non-Patent Citations (1)
Title |
---|
R. Gadde, et al.: "Semantic Video CNNs through Representation Warping", 2017 IEEE International Conference on Computer Vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104966304B (en) | Multi-target detection tracking based on Kalman filtering and nonparametric background model | |
CN109711318B (en) | Multi-face detection and tracking method based on video stream | |
CN110232380A (en) | Fire night scenes restored method based on Mask R-CNN neural network | |
CN110659627A (en) | Intelligent video monitoring method based on video segmentation | |
CN112396635B (en) | Multi-target detection method based on multiple devices in complex environment | |
EP4090036A1 (en) | Privacy shielding processing method and apparatus, electronic device, and monitoring system | |
CN110096945B (en) | Indoor monitoring video key frame real-time extraction method based on machine learning | |
CN110751630A (en) | Power transmission line foreign matter detection method and device based on deep learning and medium | |
CN112257643A (en) | Smoking behavior and calling behavior identification method based on video streaming | |
JP3486229B2 (en) | Image change detection device | |
CN111582074A (en) | Monitoring video leaf occlusion detection method based on scene depth information perception | |
CN111369548A (en) | No-reference video quality evaluation method and device based on generation countermeasure network | |
CN111369557B (en) | Image processing method, device, computing equipment and storage medium | |
CN105930814A (en) | Method for detecting personnel abnormal gathering behavior on the basis of video monitoring platform | |
CN116740110A (en) | Image edge measuring system based on secondary determination | |
CN111147815A (en) | Video monitoring system | |
CN112532999B (en) | Digital video frame deletion tampering detection method based on deep neural network | |
CN111797761A (en) | Three-stage smoke detection system, method and readable medium | |
CN112183310A (en) | Method and system for filtering redundant monitoring pictures and screening invalid monitoring pictures | |
CN113052878A (en) | Multi-path high-altitude parabolic detection method and system for edge equipment in security system | |
EP4136832A1 (en) | Enhanced person detection using face recognition and reinforced, segmented field inferencing | |
CN112307936A (en) | Passenger flow volume analysis method, system and device based on head and shoulder detection | |
CN113158963B (en) | Method and device for detecting high-altitude parabolic objects | |
CN111191593A (en) | Image target detection method and device, storage medium and sewage pipeline detection device | |
CN117011288B (en) | Video quality diagnosis method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | | |
Application publication date: 20200107 |