CN112449155A - Video monitoring method and system for protecting privacy of personnel - Google Patents
- Publication number
- CN112449155A (application number CN202011132383.XA)
- Authority
- CN
- China
- Prior art keywords
- portrait
- picture
- privacy
- current picture
- behavior
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
Abstract
The invention discloses a video monitoring method and system for protecting the privacy of personnel. The method comprises: obtaining real-time pictures from a video stream; detecting whether the current picture contains a portrait and, if so, further identifying the region where the portrait is located; tracking the portrait across consecutive frames to obtain its accurate position in the current picture; applying blurring and behavior analysis, respectively, to the portrait in the picture, and sending alarm information if a specific behavior is identified; and synthesizing the blurred pictures into a video stream that is finally output to a monitoring terminal. The invention can analyze the camera's video in real time, perform the required detection or identification tasks, and automatically raise an alarm when a person in the video stream exhibits a specific behavior, improving detection efficiency and saving manpower. Because privacy-region detection and blurring are performed automatically inside the camera and only the blurred result is exposed to monitoring personnel, infringement of citizens' privacy is effectively prevented.
Description
Technical Field
The invention relates to the technical field of information processing, and in particular to a video monitoring method and a video monitoring system for protecting the privacy of personnel.
Background
Since the beginning of the 21st century, the rapid development of Internet technology has ushered in the era of big data and made daily life more convenient. These technologies, however, raise new concerns for both data collectors and data providers: the security of big data covers not only privacy but also property, and protecting it has become increasingly important. Some scenarios, such as evidence collection and crime prevention, require access to footage that would otherwise remain private, and the enormous volume of data makes manual screening for privacy concerns impractical. In recent years, with the continuous innovation of intelligent technologies, it has become possible to use them to protect data security while avoiding intrusion into personal privacy.
Intelligent technology can protect data in real time, reduce labor costs, and safeguard privacy; reducing manual participation itself avoids the privacy problems that human monitoring would create. For example, some lawbreakers install pinhole cameras in hotel rooms to spy on guests, while some guests take room articles with them, causing property loss to the hotel. Having hotel staff inspect every room for pinhole cameras creates an enormous workload, and checking rooms before guests leave forces them to wait, lowering the hotel's service quality. Installing a monitoring camera would deter both the installation of pinhole cameras and the removal of articles, but it would also intrude on guests' privacy, so privacy protection and property security come into conflict. If behavior analysis is instead performed by artificial intelligence, with no human viewing the raw footage, the privacy of guests can be well protected.
Disclosure of Invention
The technical solutions provided by the invention make it possible to analyze personnel behavior without infringing on user privacy, and serve to protect the privacy of the personnel appearing in surveillance video.
To achieve the above purpose, the invention provides a video monitoring method and system for protecting the privacy of personnel, implemented through the following technical solutions.
In a first aspect, the invention provides a video monitoring method for protecting the privacy of personnel, comprising the following steps: obtaining real-time pictures from a video stream; detecting whether the current picture contains a portrait and, if so, further identifying the region where the portrait is located; tracking the portrait across consecutive frames to obtain its accurate position in the current picture; applying blurring and behavior analysis, respectively, to the portrait in the picture, and sending alarm information if a specific behavior is identified; and synthesizing the blurred pictures into a video stream that is finally output to a monitoring terminal.
In a preferred embodiment, detecting whether each frame contains a portrait is implemented with a convolutional neural network, specifically:
extracting the feature attributes of the picture with a convolutional neural network to generate feature maps; the feature maps at different depths have different two-dimensional spatial dimensions, their planar size shrinking progressively under the down-sampling effect of the network's pooling layers; adding the feature maps of different depths to a feature map group; and classifying the points on the feature maps in the group to judge whether they belong to the region where the portrait is located, then mapping them back to the corresponding part of the original real-time picture by exploiting the spatial invariance of the convolutional neural network.
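The shrinking planar size of the feature maps under repeated pooling can be sketched as follows (a minimal NumPy illustration; the 16×16 input and three pooling stages are arbitrary choices for demonstration, not values from the patent):

```python
import numpy as np

def max_pool2x2(fmap):
    """2x2 max-pooling with stride 2: halves each spatial dimension."""
    h, w = fmap.shape
    return fmap[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A toy 16x16 "feature map"; repeated pooling yields the maps of different depths
# that the method collects into a feature map group.
fmap = np.arange(256, dtype=float).reshape(16, 16)
feature_group = [fmap]
for _ in range(3):                       # three pooling stages
    feature_group.append(max_pool2x2(feature_group[-1]))

print([f.shape for f in feature_group])  # (16,16) -> (8,8) -> (4,4) -> (2,2)
```

Each map in the group covers the same image area at a coarser resolution, which is what lets points on deep maps be mapped back to regions of the original picture.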
In a preferred embodiment, tracking the portrait across consecutive frames to obtain its accurate position in the current picture specifically comprises: extracting an optical flow map of the current picture; feeding the optical flow map and the original image of the current picture into a convolutional neural network for feature extraction; cutting the original image of the current picture in equal proportions, so that the resulting picture blocks correspond one-to-one to regions of the original image; extracting features from the picture blocks; if no portrait was detected in the previous frame, directly obtaining the position of the portrait in the current picture, otherwise performing region association between the features of the current picture's blocks and the features of the blocks obtained from the previous frame; and selecting the picture block most strongly associated with the portrait as the position of the portrait in the current picture.
In a preferred embodiment, before performing behavior analysis on the portrait in the picture, the method further comprises: cutting the region where the portrait is located out of the real-time picture as the behavior analysis object; extracting features of the behavior analysis object to obtain feature values; and sending the feature values into a feature value queue.
Further, the behavior analysis specifically comprises: the feature values enter a convolutional recurrent neural network from the feature value queue in first-in-first-out order; the feature values are extracted and combined; and the combined feature attributes are classified to judge whether they belong to a preset specific behavior.
In a second aspect, the invention provides a video monitoring system for protecting the privacy of personnel, comprising: a picture collector for obtaining real-time pictures from a video stream; a portrait detector for detecting whether each frame contains a portrait and, if so, further identifying the region where the portrait is located; a portrait tracker for tracking the portrait across consecutive frames to obtain its accurate position in the current picture; a portrait blurrer for blurring the region where the portrait is located; and a behavior analyzer for analyzing the region where the portrait is located in each frame and judging whether a specific behavior exists.
In a preferred embodiment, the picture collector, the portrait detector, the portrait tracker and the portrait blurring device are located in a camera.
In a preferred embodiment, the human image tracker includes a region associator configured to perform region association between features in the picture blocks of a current picture and features of the picture blocks obtained in a previous frame of picture.
In a preferred embodiment, the system further comprises an alarm connected with the behavior analyzer and used for sending alarm information when the portrait in the video stream has a specific behavior.
The invention can analyze the camera's video in real time, perform the required detection or identification tasks, and automatically raise an alarm when a specific behavior of a person in the video stream is detected, so that constant manual monitoring is unnecessary; this improves detection efficiency, saves manpower, and raises the cost of illegal activity. Because privacy-region detection and blurring are performed automatically inside the camera and only the blurred result is exposed to monitoring personnel, infringement of citizens' privacy is effectively prevented.
Drawings
To illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a video monitoring method for protecting privacy of a person according to an embodiment of the present invention;
FIG. 2 is a flow chart of processing for multiple detection frames during portrait detection in the embodiment of FIG. 1;
FIG. 3 is a flowchart of portrait tracking in the embodiment of FIG. 1;
FIG. 4 is a flow chart of behavior determination in the embodiment of FIG. 1;
fig. 5 is a schematic block diagram of a video surveillance system for protecting privacy of people according to another embodiment of the present invention.
Detailed Description
With the wide application of video monitoring and image processing technologies, the problem of personal privacy within them has received growing attention, and how to properly protect citizens' privacy while still identifying specific behaviors in monitoring pictures has become an urgent problem. The technical solutions of the invention make it possible to analyze personnel behavior without infringing on user privacy, serving both privacy protection and property protection. They can resolve the real-life conflict between camera surveillance and citizen privacy in certain places, and can prevent personal privacy from being violated when camera footage is retrieved during investigation and evidence collection. Illegal actions can be detected automatically through video behavior analysis of personnel and, more importantly, the persons appearing in the footage can be blurred by the intelligent technology, protecting citizens' privacy.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
Fig. 1 shows a flow of a video monitoring method according to an embodiment of the present invention, which includes the steps of:
and S1, acquiring real-time pictures from the video stream, and performing subsequent operation on each frame of picture.
And S2, detecting whether the current picture contains the portrait, and if so, further identifying the area where the portrait is located.
Generally, detecting a portrait in a picture is realized with deep learning and, more preferably, by exploiting the invariance of relative spatial position in a convolutional network to detect the target, as follows:

S21, extracting the feature attributes of the picture with a convolutional neural network to generate feature maps;

S22, the feature maps at different depths have different two-dimensional spatial dimensions; as depth increases, the down-sampling effect of the convolutional neural network's pooling layers progressively reduces their planar size. The feature maps of different depths are added to a feature map group.
This step usually positions multiple candidate portrait boxes onto the original image; generally, the closer a detection box is to the real human body, the more such boxes are obtained. The redundant detection boxes must therefore be removed with a non-maximum suppression algorithm to obtain a more accurate portrait region. First, a bounding-box regression algorithm refines the target boxes projected onto the original image into regression-predicted boxes; then box regression and non-maximum suppression together yield the region where the portrait finally matches reality. This process is illustrated in fig. 2.
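The non-maximum suppression step described above can be sketched in a few lines of NumPy (a hedged illustration; the [x1, y1, x2, y2] box format and the 0.5 IoU threshold are assumptions, not values from the patent):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = scores.argsort()[::-1]       # process boxes from highest score down
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]   # drop boxes overlapping the kept one
    return keep

# Two overlapping candidates around one person, plus one distant person.
boxes = np.array([[10, 10, 50, 90], [12, 12, 52, 92], [200, 10, 240, 90]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the lower-scoring duplicate of box 0 is suppressed
```

The surviving boxes are then refined by the regression step to obtain the final portrait region.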
S23, classifying the points on the feature maps in the feature map group to judge whether they belong to the region where the portrait is located, and mapping them back to the corresponding part of the original real-time picture by exploiting the spatial invariance of the convolutional neural network.
S3, tracking the portrait across consecutive frames to obtain its accurate position in the current picture.
In some embodiments, this step proceeds as shown in fig. 3. An optical flow map of the current picture is first extracted with an optical flow method; the map encodes the motion information of the picture. The original picture and the optical flow map are then fed into a convolutional neural network for feature extraction; because they carry static and dynamic features respectively, their combination yields a tracking result with stronger representational power.
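As a self-contained illustration of the optical-flow idea (the embodiment does not specify a particular flow method; this sketch uses the classical Lucas–Kanade least-squares formulation for a single patch, a stand-in for a dense optical flow map):

```python
import numpy as np

def lucas_kanade_window(prev, curr):
    """Estimate a single (dx, dy) motion vector for a small patch by solving
    the least-squares system Ix*dx + Iy*dy = -It over all pixels."""
    Ix = np.gradient(prev, axis=1)       # horizontal intensity gradient
    Iy = np.gradient(prev, axis=0)       # vertical intensity gradient
    It = curr - prev                     # temporal intensity change
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# Synthetic frame pair: a smooth intensity ramp shifted one pixel to the right.
x = np.arange(32, dtype=float)
prev = np.tile(x, (32, 1))
curr = np.tile(x - 1.0, (32, 1))         # same ramp moved +1 px in x
dx, dy = lucas_kanade_window(prev, curr)
print(round(dx, 2), round(dy, 2))        # recovers the horizontal motion
```

A dense flow map simply repeats such an estimate per pixel neighborhood, giving the motion channel that is fed to the network alongside the original picture.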
Exploiting the spatial invariance of the convolutional network, the original image of the current picture is cut in equal proportions, so that the resulting picture blocks correspond one-to-one to regions of the original image; features are then extracted from the picture blocks with a convolutional neural network.
If no portrait was detected in the previous frame, the position of the portrait in the current picture is obtained directly; otherwise, region association is performed between the features of the current picture's blocks and the features of the blocks obtained from the previous frame.
The picture block most strongly associated with the portrait is selected as the position of the portrait in the current picture.
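The equal-ratio cutting and region association can be sketched as follows (a hedged NumPy illustration; the 4×4 grid, the intensity-histogram features standing in for CNN features, and cosine similarity as the association measure are all assumptions):

```python
import numpy as np

def grid_blocks(img, n=4):
    """Equal-ratio cut: split an image into an n x n grid of blocks, each
    corresponding one-to-one to a region of the original frame."""
    h, w = img.shape
    return [img[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
            for i in range(n) for j in range(n)]

def block_feature(block, bins=8):
    """Stand-in feature: a normalized intensity histogram
    (a convolutional network would extract features in practice)."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def associate(prev_feat, curr_blocks):
    """Index of the current block most correlated (cosine similarity)
    with the portrait feature from the previous frame."""
    feats = [block_feature(b) for b in curr_blocks]
    sims = [f @ prev_feat / (np.linalg.norm(f) * np.linalg.norm(prev_feat) + 1e-9)
            for f in feats]
    return int(np.argmax(sims))

rng = np.random.default_rng(0)
frame_prev = rng.integers(0, 40, (64, 64)).astype(float)
frame_prev[16:32, 16:32] = 220.0            # bright "portrait" in block (1, 1)
frame_curr = rng.integers(0, 40, (64, 64)).astype(float)
frame_curr[16:32, 16:32] = 220.0            # portrait stays in the same block
prev_feat = block_feature(grid_blocks(frame_prev)[5])   # block (1,1) -> index 5
print(associate(prev_feat, grid_blocks(frame_curr)))
```

The winning block index directly names a region of the original frame, which is what makes the block-to-region correspondence useful for localization.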
S4, applying blurring and behavior analysis, respectively, to the portrait, and sending alarm information if a specific behavior is identified; the blurred pictures are synthesized into a video stream that is finally output to a monitoring terminal.
In some embodiments, performing behavior analysis specifically includes portrait behavior feature extraction and behavior judgment.
The human image behavior feature extraction specifically comprises the following steps: extracting the characteristics of an object in a picture of a region where the current portrait is located, and acquiring characteristic values; and sending the characteristic value into a characteristic value queue.
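The feature value queue can be sketched with a bounded FIFO (a minimal illustration; the four-frame window and the placeholder feature values are assumptions for demonstration):

```python
from collections import deque

# Bounded FIFO queue of per-frame feature values: once full, the oldest
# frame's features drop out as new ones arrive (first in, first out).
WINDOW = 4                                  # number of frames fed to the classifier
feature_queue = deque(maxlen=WINDOW)

for frame_id in range(6):                   # six frames arrive over time
    feature_value = [float(frame_id)] * 3   # stand-in for extracted CNN features
    feature_queue.append(feature_value)

print(list(feature_queue))                  # only the 4 most recent frames remain
```

A `deque` with `maxlen` evicts the oldest entry automatically, which matches the first-in-first-out order in which features are consumed by the behavior classifier.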
The behavior judgment process is shown in fig. 4: the feature values enter a convolutional recurrent neural network (CRNN) from the feature value queue in first-in-first-out order, the feature values are extracted and combined, and the combined feature attributes are classified to judge whether they belong to a preset specific behavior. The specific behaviors may be illegal actions or other behaviors requiring supervision; softmax serves as the classification function. Because this step is implemented with a convolutional recurrent neural network, the accuracy with which the system automatically identifies specific behaviors improves as learning continues.
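The final classification step can be sketched as follows (a hedged NumPy illustration: the mean over the queued window stands in for the recurrent combination a CRNN performs, and the weights are hypothetical, not trained values):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over class scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_behavior(feature_window, W, b, labels):
    """Combine the queued per-frame features (mean over time, a stand-in for
    a CRNN's recurrent combination) and classify them with softmax."""
    combined = np.mean(feature_window, axis=0)      # (feature_dim,)
    probs = softmax(W @ combined + b)               # (num_classes,)
    return labels[int(np.argmax(probs))], probs

labels = ["normal", "specific_behavior"]
# Hypothetical weights; a real system would learn these end to end.
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
b = np.zeros(2)
window = np.array([[0.2, 0.9], [0.1, 0.8], [0.3, 1.0]])  # queued feature values
label, probs = classify_behavior(window, W, b, labels)
print(label, probs)
```

When the predicted class is a preset specific behavior, the alarm path described below is triggered; otherwise the blurred stream plays normally.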
In fig. 4, the part inside the dashed frame is the internal processing of the monitoring system and is invisible to monitoring personnel. If the above judgment finds that a person in the video stream exhibits a specific behavior, the monitoring device outputs alarm information through sound, text, or any other form that attracts the attention of monitoring personnel; otherwise, the monitoring terminal simply plays the blurred monitoring video.
This embodiment can analyze the camera's video in real time, perform the required detection or identification tasks, and automatically raise an alarm when a specific behavior of a person in the video stream is detected, so that constant manual monitoring is unnecessary; this improves detection efficiency, saves manpower, and raises the cost of illegal activity. The embodiment automatically detects and blurs the privacy region inside the camera and exposes only the blurred result to monitoring personnel, effectively preventing citizens' privacy from being invaded.
Example 2
This embodiment provides a video monitoring system for protecting the privacy of personnel, as shown in fig. 5, comprising: a picture collector for obtaining real-time pictures from a video stream; a portrait detector for detecting whether each frame contains a portrait and, if so, further identifying the region where the portrait is located; a portrait tracker for tracking the portrait across consecutive frames to obtain its accurate position in the current picture; a portrait blurrer for blurring the region where the portrait is located; and a behavior analyzer for analyzing the region where the portrait is located in each frame and judging whether a specific behavior exists.
The portrait detector performs portrait detection as follows: the feature attributes of the picture are extracted with a convolutional neural network to generate feature maps; the feature maps at different depths have different two-dimensional spatial dimensions and, as depth increases, the down-sampling effect of the convolutional neural network's pooling layers progressively reduces their planar size; the feature maps of different depths are added to a feature map group.
The portrait tracker is connected to the portrait detector; it tracks the portrait across consecutive frames and uses the portrait detector to correct the portrait's position. If the region where the portrait is located in the picture overlaps a tracked target, the tracked target's position is corrected; otherwise, a new tracked target is created for the portrait in the picture.
To this end, the portrait tracker of this embodiment usually further comprises a region associator. During tracking, an optical flow map of the current picture is first extracted and fed, together with the original image of the current picture, into a convolutional neural network for feature extraction; the original image is then cut in equal proportions, the resulting picture blocks corresponding one-to-one to regions of the original image, and features are extracted from the blocks. If no portrait was detected in the previous frame, the position of the portrait in the current picture is obtained directly; otherwise, region association is performed between the features of the current picture's blocks and the features of the blocks obtained from the previous frame, and the block most strongly associated with the portrait is selected as the portrait's position in the current picture.
In some specific embodiments, more than one portrait is tracked. Features are then extracted from the portraits in the same picture with a convolutional neural network, and the extracted features, together with the feature map extracted from the previous frame, are sent to the region associator, which searches for the relationship between them and the picture blocks of the original image to finally determine the tracked target portrait.
The behavior analyzer performs portrait behavior feature extraction and behavior judgment.
The human image behavior feature extraction specifically comprises the following steps: extracting the characteristics of an object in a picture of a region where the current portrait is located, and acquiring characteristic values; and sending the characteristic value into a characteristic value queue.
The behavior judgment process comprises: the feature values enter the convolutional recurrent neural network from the feature value queue in first-in-first-out order; the feature values are then extracted and combined, the combined feature attributes are classified, and whether they belong to a preset specific behavior is judged.
Preferably, the picture collector, the portrait detector, the portrait tracker, and the portrait blurrer are located inside the camera. The video stream leaving the camera has therefore already been blurred, preventing the original images from being intercepted or tampered with, improving the security of data transmission, and protecting the personal information of those appearing in the video.
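The in-camera blurring of the portrait region can be sketched as follows (a minimal NumPy mean-filter stand-in; a real blurrer would typically use a Gaussian kernel, and the coordinates and kernel size here are arbitrary):

```python
import numpy as np

def box_blur_region(frame, x1, y1, x2, y2, k=5):
    """Blur only the portrait region frame[y1:y2, x1:x2] with a k x k mean
    filter, so the frame leaving the camera never contains the clear face."""
    out = frame.copy()
    region = frame[y1:y2, x1:x2]
    pad = k // 2
    padded = np.pad(region, pad, mode="edge")
    blurred = np.zeros_like(region, dtype=float)
    for dy in range(k):                  # accumulate the k*k shifted copies
        for dx in range(k):
            blurred += padded[dy:dy + region.shape[0], dx:dx + region.shape[1]]
    out[y1:y2, x1:x2] = blurred / (k * k)
    return out

frame = np.zeros((32, 32))
frame[10:20, 10:20] = 255.0              # sharp "portrait" area
safe = box_blur_region(frame, 8, 8, 22, 22)
print(frame[10, 10], round(safe[10, 10], 1))  # the portrait edge is softened
```

Because only the region returned by the tracker is blurred, the rest of the frame stays sharp for the behavior analyzer and the monitoring terminal.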
The monitoring system further comprises an alarm connected to the behavior analyzer; when a portrait in the video stream exhibits a specific behavior, alarm information is output through sound, text, or any other form that attracts the attention of monitoring personnel.
The alarm may be a stand-alone device or a functional component embedded in the monitoring terminal, implemented in hardware or in software, for example as eye-catching text appearing on the monitoring screen. In short, anything that serves as a reminder can be regarded as the alarm of this embodiment.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the technical solution of the embodiment.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiments, since they are substantially similar to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (9)
1. A video monitoring method for protecting personnel privacy, characterized by comprising the following steps:
acquiring a real-time picture from a video stream;
detecting whether the current picture contains a portrait, and if so, further identifying the area where the portrait is located;
tracking the portrait across front-and-back consecutive frames to obtain the accurate position of the portrait in the current picture;
respectively performing blur masking and behavior analysis on the portrait in the picture, and sending alarm information if a specific behavior is determined;
and synthesizing the blurred pictures into a video stream, which is finally output to a monitoring terminal.
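As an illustrative sketch (not part of the claims), the blur-masking step of claim 1 could be realized by pixelating the detected portrait region: the scene stays readable while the face becomes unrecognizable. The function name, the NumPy-based approach, and the block size are all assumptions for illustration, not the patented implementation:

```python
import numpy as np

def pixelate_region(frame: np.ndarray, box: tuple, block: int = 8) -> np.ndarray:
    """Return a copy of `frame` with the portrait box (x0, y0, x1, y1) pixelated.

    Down-samples the region, then repeats each coarse pixel back to full
    size, so every block-by-block patch inside the box becomes a flat color.
    """
    x0, y0, x1, y1 = box
    out = frame.copy()
    roi = out[y0:y1, x0:x1]
    h, w = roi.shape[:2]
    small = roi[::block, ::block]                       # coarse sample of the region
    coarse = np.repeat(np.repeat(small, block, axis=0), block, axis=1)
    out[y0:y1, x0:x1] = coarse[:h, :w]                  # paste back, cropped to fit
    return out
```

Because the blurred frame replaces the original in the synthesized stream, the monitoring terminal never receives the unmasked portrait.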
2. The video monitoring method for protecting privacy of people according to claim 1, wherein detecting whether each frame of picture contains a portrait is implemented by a convolutional neural network, specifically comprising:
extracting feature attributes of the picture with the convolutional neural network to generate feature maps;
the feature maps at different depths differ in two-dimensional spatial size, the planar size of the feature maps shrinking progressively due to the down-sampling effect of the pooling layers of the convolutional neural network;
adding the feature maps of different depths to a set of feature maps;
and classifying the feature points on the feature maps in the feature map group, judging whether each feature point belongs to the region where the portrait is located, and mapping the corresponding part back to the original real-time picture by exploiting the spatial invariance of the convolutional neural network.
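The coordinate mapping implied in claim 2 can be sketched directly: if each pooling layer halves the spatial size, then after `depth` layers the effective stride is 2 to the power of depth, and spatial invariance lets a positively classified feature cell be converted to a pixel box in the original picture. The function below is an illustrative assumption (real networks also have receptive-field offsets the patent does not specify):

```python
def feature_cell_to_image_box(i: int, j: int, depth: int) -> tuple:
    """Map feature-map cell (row i, col j) at a given pooling depth back to
    the (x0, y0, x1, y1) pixel box it covers in the original picture.

    Each 2x2 pooling layer halves the feature map, so the stride after
    `depth` layers is 2 ** depth.
    """
    stride = 2 ** depth
    x0, y0 = j * stride, i * stride
    return (x0, y0, x0 + stride, y0 + stride)
```

Running the same mapping over every positively classified cell, at every depth in the feature map group, yields the candidate portrait regions at multiple scales.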
3. The video monitoring method for protecting privacy of people according to claim 1, wherein tracking the portrait across consecutive frames to obtain the accurate position of the portrait in the current picture specifically comprises:
extracting an optical flow map of the current picture;
sending the optical flow map and the original image of the current picture to a convolutional neural network for feature extraction;
cutting the original image of the current picture into equally proportioned picture blocks, each corresponding one-to-one to a region of the original image;
extracting features in the picture blocks;
if no portrait was detected in the previous frame of picture, directly obtaining the position of the portrait in the current picture; otherwise, performing region association between the features in the picture blocks of the current picture and the features of the picture blocks obtained from the previous frame;
and taking the picture block most strongly correlated with the portrait as the position of the portrait in the current picture.
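The block-association search of claim 3 amounts to finding, among the current frame's picture blocks, the one whose features correlate most strongly with the portrait features from the previous frame. A minimal NumPy sketch follows; cosine similarity is an assumed correlation measure, since the claim does not fix one:

```python
import numpy as np

def associate_portrait(block_feats: np.ndarray, prev_feat: np.ndarray) -> int:
    """Return the index of the current frame's picture block whose feature
    vector correlates most strongly with the previous frame's portrait.

    block_feats: (n_blocks, dim) features of the current frame's blocks.
    prev_feat:   (dim,) portrait feature from the previous frame.
    """
    norms = np.linalg.norm(block_feats, axis=1) * np.linalg.norm(prev_feat)
    sims = block_feats @ prev_feat / np.maximum(norms, 1e-12)  # cosine similarity
    return int(np.argmax(sims))
```

The winning block's region (via the one-to-one block-to-region mapping) is then reported as the portrait position in the current picture.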
4. The video monitoring method for protecting privacy of people according to claim 1, further comprising the following steps before the behavior analysis of the portrait in the picture:
cutting the region where the portrait is located out of the real-time picture as the behavior analysis object;
extracting features of the behavior analysis object to obtain feature values;
and sending the feature values into a feature value queue.
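The feature value queue of claim 4 is a plain bounded FIFO buffer; Python's `collections.deque` with `maxlen` gives exactly this behavior. The window length of 16 is an assumption for illustration:

```python
from collections import deque

# Bounded FIFO: once full, pushing a new feature value evicts the oldest,
# so the queue always holds the most recent window of portrait features.
feature_queue: deque = deque(maxlen=16)

def push_feature(value) -> None:
    feature_queue.append(value)  # oldest entry drops automatically when full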
5. The video monitoring method for protecting privacy of people according to claim 4, wherein the behavior analysis specifically comprises:
the feature values enter a convolutional recurrent neural network from the feature value queue in sequence on a first-in first-out basis;
extracting and combining the feature values;
and classifying the combined feature attributes to judge whether they belong to a preset specific behavior.
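Claim 5's convolutional recurrent network consumes the queue first-in first-out and classifies the combined features. As a stand-in for the full network, the sketch below runs a plain tanh recurrent step over the queued feature vectors and thresholds a sigmoid score on the final state; the weights, the threshold, and the simple RNN cell itself are illustrative assumptions, not the patented architecture:

```python
import numpy as np

def classify_behavior(queue, W, U, w_out, threshold=0.5):
    """Consume queued feature vectors in FIFO order through a tanh RNN step
    and return True when the final hidden state scores above the threshold,
    i.e. when the sequence is judged to show the preset specific behavior."""
    h = np.zeros(U.shape[0])
    for x in queue:                       # first-in, first-out order
        h = np.tanh(W @ x + U @ h)       # combine current features with history
    score = 1.0 / (1.0 + np.exp(-(w_out @ h)))  # sigmoid on the summary state
    return bool(score > threshold)
```

A True result here is what would trigger the alarm information of claim 1.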
6. A video monitoring system for protecting personnel privacy, characterized by comprising:
The picture collector is used for obtaining a real-time picture from the video stream;
the portrait detector is used for detecting whether each frame of picture contains a portrait or not, and if so, further identifying the area where the portrait is located;
the portrait tracker is used for tracking the portrait across front-and-back consecutive frames to acquire the accurate position of the portrait in the current picture;
the portrait blurrer is used for performing blurring processing on the area where the portrait is located;
and the behavior analyzer is used for analyzing the area where the portrait is located in each frame of picture and judging whether a specific behavior exists or not.
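The five units of claim 6 form a straight pipeline: collector, detector, tracker, blurrer, analyzer. A minimal wiring sketch follows; the class name, callables, and the choice to analyze before blurring (so the analyzer sees unmasked detail) are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SurveillancePipeline:
    collect: Callable  # picture collector: video stream -> frame
    detect: Callable   # portrait detector: frame -> portrait box or None
    track: Callable    # portrait tracker: frame, box -> refined box
    blur: Callable     # portrait blurrer: frame, box -> masked frame
    analyze: Callable  # behavior analyzer: frame, box -> bool (specific behavior?)

    def process(self, stream):
        frame = self.collect(stream)
        box = self.detect(frame)
        if box is None:
            return frame, False            # nothing to mask or analyze
        box = self.track(frame, box)
        alarm = self.analyze(frame, box)   # analyze before the blur removes detail
        return self.blur(frame, box), alarm
```

Each callable would wrap the corresponding model from claims 2 through 5; the returned alarm flag drives the alarm of claim 9.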
7. The video monitoring system for protecting privacy of people according to claim 6, wherein the picture collector, the portrait detector, the portrait tracker, and the portrait blurrer are located in a camera.
8. The video monitoring system for protecting privacy of people according to claim 6, wherein the portrait tracker includes a region associator for performing region association between features in the picture blocks of a current picture and features of the picture blocks obtained in a previous frame of picture.
9. The video monitoring system for protecting privacy of people according to claim 6, further comprising an alarm connected to the behavior analyzer, for issuing alarm information when the portrait in the video stream exhibits a specific behavior.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011132383.XA CN112449155A (en) | 2020-10-21 | 2020-10-21 | Video monitoring method and system for protecting privacy of personnel |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011132383.XA CN112449155A (en) | 2020-10-21 | 2020-10-21 | Video monitoring method and system for protecting privacy of personnel |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112449155A true CN112449155A (en) | 2021-03-05 |
Family
ID=74735935
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011132383.XA Pending CN112449155A (en) | 2020-10-21 | 2020-10-21 | Video monitoring method and system for protecting privacy of personnel |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112449155A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114067236A (en) * | 2021-10-28 | 2022-02-18 | 中国电子科技集团公司电子科学研究院 | Target person information detection device, detection method and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101610396A (en) * | 2008-06-16 | 2009-12-23 | 北京智安邦科技有限公司 | Intellective video monitoring device module and system and method for supervising thereof with secret protection |
US20130108105A1 (en) * | 2011-10-31 | 2013-05-02 | Electronics And Telecommunications Research Institute | Apparatus and method for masking privacy region based on monitored video image |
CN108039008A (en) * | 2017-12-29 | 2018-05-15 | 英华达(南京)科技有限公司 | Intelligent video monitoring method, apparatus and system |
CN108363997A (en) * | 2018-03-20 | 2018-08-03 | 南京云思创智信息科技有限公司 | It is a kind of in video to the method for real time tracking of particular person |
CN108647599A (en) * | 2018-04-27 | 2018-10-12 | 南京航空航天大学 | In conjunction with the Human bodys' response method of 3D spring layers connection and Recognition with Recurrent Neural Network |
CN111784735A (en) * | 2020-04-15 | 2020-10-16 | 北京京东尚科信息技术有限公司 | Target tracking method, device and computer readable storage medium |
- 2020-10-21: CN patent application CN202011132383.XA filed (CN112449155A, status: Pending)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015166612A1 (en) | Image analysis system, image analysis method, and image analysis program | |
US20070122000A1 (en) | Detection of stationary objects in video | |
CN103020275A (en) | Video analysis method based on video abstraction and video retrieval | |
Kongurgsa et al. | Real-time intrusion—detecting and alert system by image processing techniques | |
KR20090044957A (en) | Theft and left baggage survellance system and meothod thereof | |
Lin et al. | Real-time active tampering detection of surveillance camera and implementation on digital signal processor | |
CN112449155A (en) | Video monitoring method and system for protecting privacy of personnel | |
US20230316763A1 (en) | Few-shot anomaly detection | |
CN114764895A (en) | Abnormal behavior detection device and method | |
KR101929212B1 (en) | Apparatus and method for masking moving object | |
KR100920937B1 (en) | Apparatus and method for detecting motion, and storing video within security system | |
CN111753587A (en) | Method and device for detecting falling to ground | |
CN113537165B (en) | Detection method and system for pedestrian alarm | |
CN111325185B (en) | Face fraud prevention method and system | |
Kaur | Background subtraction in video surveillance | |
Naurin et al. | A proposed architecture to suspect and trace criminal activity using surveillance cameras | |
Esan et al. | A computer vision model for detecting suspicious behaviour from multiple cameras in crime hotspots using convolutional neural networks | |
CN113627383A (en) | Pedestrian loitering re-identification method for panoramic intelligent security | |
Cabanto et al. | Real-time multi-person smoking event detection | |
CN117456610B (en) | Climbing abnormal behavior detection method and system and electronic equipment | |
US20210350138A1 (en) | Method to identify affiliates in video data | |
CN112598704A (en) | Target positioning and tracking method and system for public place | |
KR20220167561A (en) | Method of extracting objects of interest from CCTV images | |
Devi et al. | Dynamic Abandoned Object Detector Through Camera Surveillance System | |
KR20230012171A (en) | Object detection and identification system for TOD in incidental images based on deep learning and method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210305 ||