CN114863337A - Novel screen anti-photographing recognition method - Google Patents

Novel screen anti-photographing recognition method

Info

Publication number
CN114863337A
CN114863337A
Authority
CN
China
Prior art keywords
image
photographing
video frame
model
target detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210493502.7A
Other languages
Chinese (zh)
Inventor
陶冠宏 (Tao Guanhong)
范振军 (Fan Zhenjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Days Austrian Group Co ltd
Original Assignee
Chengdu Days Austrian Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Days Austrian Group Co ltd filed Critical Chengdu Days Austrian Group Co ltd
Priority to CN202210493502.7A priority Critical patent/CN114863337A/en
Publication of CN114863337A publication Critical patent/CN114863337A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a novel screen anti-photographing recognition method comprising the following steps: acquiring an image to be recognized, generating a semantic graph, performing instance target detection, recognizing photographing behavior, and post-processing. The method achieves high-precision, real-time detection and recognition of illicit photographing against complex backgrounds. When it detects that someone is photographing the screen with a mobile phone, camera, video camera, or similar device, it triggers the computer to hide the displayed content immediately, i.e., to lock the screen, while recording an image of the photographer. This not only effectively prevents screen information from being photographed and stolen, but also allows the illicit photographing behavior to be recorded, traced, and verified, greatly raising the level of intelligence of confidentiality work under the new situation.

Description

Novel screen anti-photographing recognition method
Technical Field
The invention relates to a novel screen anti-photographing recognition method, and belongs to the technical field of computer vision.
Background
In recent years, with the rapid spread and widespread use of intelligent terminal devices such as mobile phones and tablet computers, photographing screens to steal information has become one of the hardest leaks to control in the information security work of defense-related units, security departments, key industries, and enterprises. Although security protection strategies for computer equipment are now largely mature, the technical means for preventing screens from being photographed remain weak. Existing anti-photographing approaches mainly rely on image acquisition equipment that captures a video stream in real time and recognizes photographing behavior in it. Detection of handheld covert-photographing devices is mainly realized with target detection technology: the person and the handheld device are located in the image and their positions and sizes are determined. This approach is affected by complex environments, in which objects take on different appearances and forms, and by interference from illumination, occlusion, and co-occurring similar objects at imaging time. As a result, screen anti-photographing detection suffers from frequent misrecognition and low recognition precision and cannot meet the requirements of real scenes.
Therefore, to solve the above problems, a new screen anti-photographing recognition algorithm is needed.
Disclosure of Invention
To solve the above technical problems, the invention provides a novel screen anti-photographing recognition method. It addresses the frequent misrecognition and low precision of existing anti-photographing recognition methods against complex backgrounds, effectively suppresses the interference of complex backgrounds on recognition precision, and significantly improves photographing recognition accuracy, thereby raising the level of intelligence of leak prevention.
The invention is realized by the following technical scheme.
The invention provides a novel screen anti-photographing recognition method, which comprises the following steps:
① Acquiring an image to be recognized: collecting video frame images from a USB plug-and-play camera installed on the screen;
② Generating a semantic graph: taking a video frame image to be recognized and removing the background information in the image with an end-to-end image segmentation model, obtaining a foreground image that retains only the objects of interest;
③ Instance target detection: detecting the camera phone in the semantic graph with a fine-tuned single-stage target detection model to determine whether a mobile phone is present;
④ Photographing behavior recognition: recognizing photographing behavior in the semantic graph with a classification model to determine whether photographing behavior is present;
⑤ Post-processing: fusing and analyzing the target detection result and the photographing recognition classification result; if photographing behavior occurs, locking the screen and preserving evidence, otherwise returning to step ① and continuing to process subsequent video frame images.
In step ②, the foreground image retains only the person and the mobile phone as objects of interest.
In step ①, the work-station area monitoring picture captured by the camera is taken as the main object of subsequent processing.
Step ② comprises the following sub-steps:
(2.1) extracting the video frame image mask: a U²-Net image segmentation model with an input size of 320 × 320 × 3 extracts the mask of the video frame image; the mask is a grayscale image with the background information removed, in which foreground and background pixel values are represented by 255 and 0 respectively;
(2.2) converting the grayscale mask image into a color mask image: a difference operation between the grayscale mask image and the original image yields the color mask image;
(2.3) generating the semantic graph: pixel-value replacement is performed between the color mask image and the original image in each of the three RGB channels to obtain the semantic graph of the original video frame image, in which only the objects of interest are retained.
In step (2.3), the pixel values are replaced as follows:
the background pixel values in the color mask image are replaced with 255, and the foreground pixel values are replaced with the corresponding pixel values of the original image.
In step ③, a fine-tuned single-stage YOLO-v5 target detection model performs target detection on the semantic graph of the video frame image to be recognized, and the detection result for the target of interest, cell-phone, is used as prior information for the subsequent photographing behavior recognition stage.
The YOLO-v5 target detection model is obtained by fine-tuning training on the self-built target detection data set phone-detection; the input size of the model is 640 × 640 × 3, and the number of training epochs is 200.
The self-built target detection data set is constructed as follows:
images labeled person, cell-phone, and cup are extracted from the coco2017 public data set to generate a sub-coco17 subset, the collected video frame image data are manually labeled with the categories person and cell-phone, and the two are then fused into one set.
In step ④, the constructed photo-rec-model recognizes photographing behavior in the semantic graph of the video frame image to be recognized; the recognition result and the input prior information are jointly discriminated and output to the subsequent anti-photographing post-processing stage.
The photo-rec-model takes Resnet-Inceptionv2 as the base model, adds a two-class head consisting of a pooling layer, a convolution layer, a Dropout layer, a fully connected layer, and an output layer, and is obtained by fine-tuning training on the self-built photographing data set photo-rec; the model takes RGB images of size 229 × 229 × 3 as input, and the number of training epochs is 150;
the self-built photographing data set comprises the following steps:
the method comprises the steps of adopting a USB plug-and-play camera to collect video frame image data of three situations, namely using a mobile phone, not using the mobile phone and photographing by the mobile phone in an office work station scene, and forming non-photographing and photographing according to two types.
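As an illustration only (no code appears in the patent), the five-step flow described above can be sketched in Python with the stage models injected as callables; `segment`, `detect_phone`, `classify`, and `on_alarm` are hypothetical names standing in for the U²-Net, YOLO-v5, classifier, and screen-lock stages:

```python
def fuse(phone_detected: bool, photographing: bool) -> bool:
    # Step 5: trigger the lock only when detection and classification agree.
    return phone_detected and photographing

def run_pipeline(frames, segment, detect_phone, classify, on_alarm):
    """Wire steps 1-5 together. `frames` is any iterable of video frames
    (e.g. read from the USB camera on the screen); the three models are
    passed in as callables so the sketch stays framework-neutral."""
    alarms = 0
    for frame in frames:                      # step 1: acquire a frame
        semantic = segment(frame)             # step 2: semantic graph
        if fuse(detect_phone(semantic), classify(semantic)):  # steps 3-5
            on_alarm(frame)                   # lock screen + keep evidence
            alarms += 1
        # otherwise fall through and process the next frame
    return alarms
```

The conjunction in `fuse` mirrors step ⑤: a screen lock requires both a detected phone and a positive photographing classification.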
The beneficial effects of the invention are as follows: the method achieves high-precision, real-time detection and recognition of illicit photographing against complex backgrounds. When it detects that someone is photographing the screen with a mobile phone, camera, or similar device, it triggers the computer to hide the displayed content immediately, i.e., to lock the screen, while recording an image of the photographer. This not only effectively prevents screen information from being photographed and stolen, but also allows the illicit photographing behavior to be recorded, traced, and verified, greatly raising the level of intelligence of confidentiality work under the new situation.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of semantic graph generation;
FIG. 3 is a diagram of actual-scene verification results according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described below, but the scope of the claimed invention is not limited to what is described.
As shown in FIGS. 1 and 2, the novel screen anti-photographing recognition method comprises the following steps:
① Acquiring an image to be recognized: collecting video frame images from a USB plug-and-play camera installed on the screen;
② Generating a semantic graph: taking a video frame image to be recognized and removing the background information in the image with an end-to-end image segmentation model, obtaining a foreground image that retains only the objects of interest;
③ Instance target detection: detecting the camera phone in the semantic graph with a fine-tuned single-stage target detection model to determine whether a mobile phone is present;
④ Photographing behavior recognition: recognizing photographing behavior in the semantic graph with a classification model to determine whether photographing behavior is present;
⑤ Post-processing: fusing and analyzing the target detection result and the photographing recognition classification result; if photographing behavior occurs, locking the screen and preserving evidence, otherwise returning to step ① and continuing to process subsequent video frame images.
In step ②, the foreground image retains only the person and the mobile phone as objects of interest.
In step ①, the work-station area monitoring picture captured by the camera is taken as the main object of subsequent processing.
Step ② comprises the following sub-steps:
(2.1) extracting the video frame image mask: a U²-Net image segmentation model with an input size of 320 × 320 × 3 extracts the mask of the video frame image; the mask is a grayscale image with the background information removed, in which foreground and background pixel values are represented by 255 and 0 respectively;
(2.2) converting the grayscale mask image into a color mask image: a difference operation between the grayscale mask image and the original image yields the color mask image;
(2.3) generating the semantic graph: pixel-value replacement is performed between the color mask image and the original image in each of the three RGB channels to obtain the semantic graph of the original video frame image, in which only the objects of interest are retained.
In step (2.3), the pixel values are replaced as follows:
the background pixel values in the color mask image are replaced with 255, and the foreground pixel values are replaced with the corresponding pixel values of the original image.
In step ③, a fine-tuned single-stage YOLO-v5 target detection model performs target detection on the semantic graph of the video frame image to be recognized, and the detection result for the target of interest, cell-phone, is used as prior information for the subsequent photographing behavior recognition stage.
The YOLO-v5 target detection model is obtained by fine-tuning training on the self-built target detection data set phone-detection; the input size of the model is 640 × 640 × 3, and the number of training epochs is 200.
The self-built target detection data set is constructed as follows:
images labeled person, cell-phone, and cup are extracted from the coco2017 public data set to generate a sub-coco17 subset, the collected video frame image data are manually labeled with the categories person and cell-phone, and the two are then fused into one set.
In step ④, the constructed photo-rec-model recognizes photographing behavior in the semantic graph of the video frame image to be recognized; the recognition result and the input prior information are jointly discriminated and output to the subsequent anti-photographing post-processing stage.
The photo-rec-model takes Resnet-Inceptionv2 as the base model, adds a two-class head consisting of a pooling layer, a convolution layer, a Dropout layer, a fully connected layer, and an output layer, and is obtained by fine-tuning training on the self-built photographing data set photo-rec; the model takes RGB images of size 229 × 229 × 3 as input, and the number of training epochs is 150;
The self-built photographing data set is constructed as follows:
a USB plug-and-play camera collects video frame image data of three situations in an office work-station scene, namely using a mobile phone, not using a mobile phone, and photographing with a mobile phone, and the data are organized into two classes, non-photographing and photographing.
Examples
As shown in FIGS. 1 to 3, the novel screen anti-photographing behavior recognition method comprises the following steps:
① Acquiring an image to be recognized: collecting video frame images from a USB plug-and-play camera installed at the top middle of the screen;
② Generating a semantic graph: taking a video frame image to be recognized and removing the background information in the image with an end-to-end image segmentation model, obtaining a foreground image that retains only the objects of interest;
③ Instance target detection: detecting the camera phone in the semantic graph with a fine-tuned single-stage target detection model; the result is used as prior information for the subsequent recognition stage;
④ Photographing behavior recognition: recognizing photographing behavior in the semantic graph with a classification model;
⑤ Post-processing: fusing and analyzing the target detection result and the photographing recognition classification result; if photographing behavior occurs, locking the screen and preserving evidence, otherwise continuing to process subsequent video frame images.
The specific implementation steps are as follows:
1. Acquiring an image to be recognized: video frame images are collected from a USB plug-and-play camera installed at the top middle of the screen, and the work-station area monitoring picture captured by the camera is taken as the main object of subsequent processing.
2. Semantic graph generation stage:
1) Extracting the video frame image mask: a U²-Net image segmentation model with an input size of 320 × 320 × 3 extracts the mask of the video frame image; the mask is a grayscale image with the background information removed, in which foreground and background pixel values are represented by 255 and 0 respectively;
2) Converting the grayscale mask image into a color mask image: a difference operation between the grayscale mask image and the original image yields the color mask image;
3) Generating the semantic graph: pixel-value replacement is performed between the color mask image and the original image in each of the three RGB channels, i.e., the background pixel values in the color mask image are replaced with 255 and the foreground pixel values are replaced with the corresponding pixel values of the original image, yielding the semantic graph of the original video frame image in which only the objects of interest are retained.
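A minimal NumPy sketch of the semantic graph generation described above, assuming a U²-Net-style 255/0 grayscale mask; for brevity it collapses sub-steps 2) and 3) into a single per-channel replacement, and the function name is an assumption, not from the patent:

```python
import numpy as np

def semantic_graph(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """frame: H x W x 3 RGB image; mask: H x W grayscale mask in which
    foreground pixels are 255 and background pixels are 0. Returns the
    semantic graph: background replaced by 255 (white), foreground keeping
    the original pixel values in all three RGB channels."""
    fg = mask.astype(bool)             # nonzero mask values = foreground
    out = np.full_like(frame, 255)     # background pixels become 255
    out[fg] = frame[fg]                # foreground keeps original pixels
    return out
```

Whitening the background rather than blacking it out matches the patent's choice of 255 as the background replacement value.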
3. Instance target detection stage:
A fine-tuned single-stage YOLO-v5 target detection model performs target detection on the semantic graph of the video frame image to be recognized, and the detection result for the target of interest, cell-phone, is used as prior information for the subsequent photographing behavior recognition stage.
The YOLO-v5 target detection model is obtained by fine-tuning training on the self-built target detection data set phone-detection; the input size of the model is 640 × 640 × 3, and the number of training epochs is 200. The self-built target detection data set is formed by extracting images labeled person, cell-phone, and cup from the coco2017 public data set to generate a sub-coco17 subset, manually labeling the collected video frame image data with the categories person and cell-phone, and fusing the two into one set.
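For illustration, the detection stage could be wired up as in the following sketch. The `torch.hub.load` call follows the public ultralytics/yolov5 loading convention, but the weights file name and the `phone_prior` helper are assumptions, not part of the patent:

```python
def load_detector(weights="phone-detection.pt"):
    """Hypothetical loader for the fine-tuned single-stage YOLO-v5 model
    (640 x 640 x 3 input, 200 training epochs on the self-built
    phone-detection set). Requires torch and the ultralytics/yolov5 repo."""
    import torch  # imported lazily so the prior logic below stays dependency-free
    return torch.hub.load("ultralytics/yolov5", "custom", path=weights)

def phone_prior(detections, conf_thresh=0.5):
    # Reduce raw (label, confidence) detections to the boolean prior that
    # the photographing behavior recognition stage consumes.
    return any(label == "cell phone" and conf >= conf_thresh
               for label, conf in detections)
```

Thresholding the detector's confidence before handing the prior downstream is one simple way to realize "detected result ... used as prior information"; the 0.5 threshold is an illustrative default.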
4. Photographing behavior recognition stage:
The constructed photo-rec-model recognizes photographing behavior in the semantic graph of the video frame image to be recognized; the recognition result and the input prior information are jointly discriminated and output to the subsequent anti-photographing post-processing stage; otherwise the next video frame image is processed.
The photo-rec-model takes Resnet-Inceptionv2 as the base model, adds a two-class head consisting of a pooling layer, a convolution layer, a Dropout layer, a fully connected layer, and an output layer, and is obtained by fine-tuning training on the self-built photographing data set photo-rec; the model takes RGB images of size 229 × 229 × 3 as input, and the number of training epochs is 150. The self-built photographing data set is built by using a USB plug-and-play camera to collect video frame image data of three situations in an office work-station scene, namely using a mobile phone, not using a mobile phone, and photographing with a mobile phone, organized into two classes, non-photographing and photographing.
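One plausible reading of the described classifier, sketched with Keras: the backbone call is the standard `InceptionResNetV2` application, while the head's layer widths (256, 128) and dropout rate are illustrative assumptions the patent does not specify:

```python
def build_photo_rec_model(input_size=229):
    """Hypothetical sketch of photo-rec-model: an Inception-ResNet-v2
    backbone plus the two-class head the description lists (pooling,
    convolution, Dropout, fully connected, output). Requires TensorFlow."""
    from tensorflow import keras  # lazy import; heavy dependency
    base = keras.applications.InceptionResNetV2(
        include_top=False, input_shape=(input_size, input_size, 3))
    x = keras.layers.GlobalAveragePooling2D()(base.output)  # pooling layer
    x = keras.layers.Reshape((1, 1, -1))(x)
    x = keras.layers.Conv2D(256, 1, activation="relu")(x)   # convolution layer
    x = keras.layers.Dropout(0.5)(x)                        # Dropout layer
    x = keras.layers.Flatten()(x)
    x = keras.layers.Dense(128, activation="relu")(x)       # fully connected layer
    out = keras.layers.Dense(2, activation="softmax")(x)    # two-class output
    return keras.Model(base.input, out)

def label_from_probs(probs, classes=("non-photographing", "photographing")):
    # Map the softmax output of the two-class head to a data set label.
    return classes[max(range(len(probs)), key=probs.__getitem__)]
```

The two output classes mirror the non-photographing/photographing split of the self-built photo-rec data set.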
5. Anti-photographing post-processing stage:
According to the photographing recognition result, if photographing behavior is judged to have occurred, the screen is locked and the judgment result is marked on the video frame image to be recognized and stored as evidence; the next video frame image is then processed.
The method of the present invention can be deployed as a software system or integrated into hardware through program instructions, and is easily understood and implemented by those skilled in the art.
The semantic graph generation method, target detection model, and photographing recognition classification model of the present invention are not limited to those described herein; the invention covers any relevant modifications or adaptive changes that follow its general principles or conventional technical means.

Claims (10)

1. A novel screen anti-photographing recognition method, characterized by comprising the following steps:
① acquiring an image to be recognized: collecting video frame images from a USB plug-and-play camera installed on the screen;
② generating a semantic graph: taking a video frame image to be recognized and removing the background information in the image with an end-to-end image segmentation model, obtaining a foreground image that retains only the objects of interest;
③ instance target detection: detecting the camera phone in the semantic graph with a fine-tuned single-stage target detection model to determine whether a mobile phone is present;
④ photographing behavior recognition: recognizing photographing behavior in the semantic graph with a classification model to determine whether photographing behavior is present;
⑤ post-processing: fusing and analyzing the target detection result and the photographing recognition classification result; if photographing behavior occurs, locking the screen and preserving evidence, otherwise returning to step ① and continuing to process subsequent video frame images.
2. The novel screen anti-photographing recognition method of claim 1, characterized in that: in step ②, the foreground image retains only the person and the mobile phone as objects of interest.
3. The novel screen anti-photographing recognition method of claim 1, characterized in that: in step ①, the work-station area monitoring picture captured by the camera is taken as the main object of subsequent processing.
4. The novel screen anti-photographing recognition method of claim 1, characterized in that step ② comprises the following sub-steps:
(2.1) extracting the video frame image mask: a U²-Net image segmentation model with an input size of 320 × 320 × 3 extracts the mask of the video frame image; the mask is a grayscale image with the background information removed, in which foreground and background pixel values are represented by 255 and 0 respectively;
(2.2) converting the grayscale mask image into a color mask image: a difference operation between the grayscale mask image and the original image yields the color mask image;
(2.3) generating the semantic graph: pixel-value replacement is performed between the color mask image and the original image in each of the three RGB channels to obtain the semantic graph of the original video frame image, in which only the objects of interest are retained.
5. The novel screen anti-photographing recognition method of claim 4, characterized in that in step (2.3) the pixel values are replaced as follows:
the background pixel values in the color mask image are replaced with 255, and the foreground pixel values are replaced with the corresponding pixel values of the original image.
6. The novel screen anti-photographing recognition method of claim 1, characterized in that: in step ③, a fine-tuned single-stage YOLO-v5 target detection model performs target detection on the semantic graph of the video frame image to be recognized, and the detection result for the target of interest, cell-phone, is used as prior information for the subsequent photographing behavior recognition stage.
7. The novel screen anti-photographing recognition method of claim 6, characterized in that: the YOLO-v5 target detection model is obtained by fine-tuning training on the self-built target detection data set phone-detection; the input size of the model is 640 × 640 × 3, and the number of training epochs is 200.
8. The novel screen anti-photographing recognition method of claim 7, characterized in that the self-built target detection data set is constructed as follows:
images labeled person, cell-phone, and cup are extracted from the coco2017 public data set to generate a sub-coco17 subset, the collected video frame image data are manually labeled with the categories person and cell-phone, and the two are then fused into one set.
9. The novel screen anti-photographing recognition method of claim 1, characterized in that: in step ④, the constructed photo-rec-model recognizes photographing behavior in the semantic graph of the video frame image to be recognized; the recognition result and the input prior information are jointly discriminated and output to the subsequent anti-photographing post-processing stage.
10. The novel screen anti-photographing recognition method of claim 9, characterized in that: the photo-rec-model takes Resnet-Inceptionv2 as the base model, adds a two-class head consisting of a pooling layer, a convolution layer, a Dropout layer, a fully connected layer, and an output layer, and is obtained by fine-tuning training on the self-built photographing data set photo-rec; the model takes RGB images of size 229 × 229 × 3 as input, and the number of training epochs is 150;
the self-built photographing data set is constructed as follows:
a USB plug-and-play camera collects video frame image data of three situations in an office work-station scene, namely using a mobile phone, not using a mobile phone, and photographing with a mobile phone, and the data are organized into two classes, non-photographing and photographing.
CN202210493502.7A 2022-05-07 2022-05-07 Novel screen anti-photographing recognition method Pending CN114863337A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210493502.7A CN114863337A (en) 2022-05-07 2022-05-07 Novel screen anti-photographing recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210493502.7A CN114863337A (en) 2022-05-07 2022-05-07 Novel screen anti-photographing recognition method

Publications (1)

Publication Number Publication Date
CN114863337A true CN114863337A (en) 2022-08-05

Family

ID=82634551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210493502.7A Pending CN114863337A (en) 2022-05-07 2022-05-07 Novel screen anti-photographing recognition method

Country Status (1)

Country Link
CN (1) CN114863337A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116343099A (en) * 2023-05-26 2023-06-27 东莞市金铠计算机科技有限公司 Computer screen information anti-theft system based on machine vision
CN116343099B (en) * 2023-05-26 2023-07-25 东莞市金铠计算机科技有限公司 Computer screen information anti-theft system based on machine vision

Similar Documents

Publication Publication Date Title
CN102833478B (en) Fault-tolerant background model
Jung Efficient background subtraction and shadow removal for monochromatic video sequences
Mushtaq et al. Digital image forgeries and passive image authentication techniques: a survey
Kashyap et al. An evaluation of digital image forgery detection approaches
CN108154080B (en) Method for quickly tracing to source of video equipment
CN111368635B (en) Millimeter wave-based multi-person gait recognition method and device
CN1849613A (en) Apparatus and method for feature recognition
Zhang et al. License plate localization in unconstrained scenes using a two-stage CNN-RNN
Su et al. A novel forgery detection algorithm for video foreground removal
KR20170015639A (en) Personal Identification System And Method By Face Recognition In Digital Image
CN114332744B (en) Transformer substation self-adaptive security method and system based on machine vision
CN111242077A (en) Figure tracking method, system and server
CN114885119A (en) Intelligent monitoring alarm system and method based on computer vision
CN115273208A (en) Track generation method, system and device and electronic equipment
CN114863337A (en) Novel screen anti-photographing recognition method
CN111429376A (en) High-efficiency digital image processing method with high-precision and low-precision integration
CN113537050A (en) Dynamic face recognition algorithm based on local image enhancement
KR101547255B1 (en) Object-based Searching Method for Intelligent Surveillance System
Ma et al. Multi-perspective dynamic features for cross-database face presentation attack detection
CN114140674B (en) Electronic evidence availability identification method combined with image processing and data mining technology
CN112699810B (en) Method and device for improving character recognition precision of indoor monitoring system
González et al. Towards refining ID cards presentation attack detection systems using face quality index
CN111985331B (en) Detection method and device for preventing trade secret from being stolen
CN112907206B (en) Business auditing method, device and equipment based on video object identification
CN110572618B (en) Illegal photographing behavior monitoring method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination