CN109409315B - Method and system for detecting remnants in panel area of ATM (automatic Teller machine) - Google Patents

Method and system for detecting remnants in panel area of ATM (automatic Teller machine)

Info

Publication number
CN109409315B
CN109409315B · CN201811319006.XA · CN201811319006A
Authority
CN
China
Prior art keywords
image
pedestrian
area
video stream
leaving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811319006.XA
Other languages
Chinese (zh)
Other versions
CN109409315A (en)
Inventor
李代远
陈吉宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haoyun Technologies Co Ltd
Original Assignee
Haoyun Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haoyun Technologies Co Ltd filed Critical Haoyun Technologies Co Ltd
Priority to CN201811319006.XA priority Critical patent/CN109409315B/en
Publication of CN109409315A publication Critical patent/CN109409315A/en
Application granted granted Critical
Publication of CN109409315B publication Critical patent/CN109409315B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images

Abstract

The invention discloses a method and a system for detecting carry-over objects in the panel area of an ATM (automatic teller machine), wherein the method comprises the following steps: step S1, acquiring a video stream of the ATM area, processing the acquired video stream with a background modeling method, and detecting the entering or leaving of pedestrians in the video stream; step S2, capturing one frame of image each when a pedestrian is detected entering and leaving, performing difference processing on the pedestrian-entering image and the pedestrian-leaving image, and extracting the gray-level change region of the image; step S3, segmenting the pedestrian-leaving image with a trained convolutional neural network model and extracting a suspected carry-over region; step S4, calculating the degree of overlap between the gray-level change region extracted in step S2 and the suspected carry-over region extracted in step S3 to determine whether a carry-over object exists on the panel.

Description

Method and system for detecting remnants in panel area of ATM (automatic Teller machine)
Technical Field
The invention relates to the technical field of image processing and intelligent video monitoring, in particular to an ATM panel area remnant detection method and system based on a deep convolutional neural network.
Background
Currently, the Automatic Teller Machine (ATM) has become an indispensable part of daily life, and every bank deploys a large number of ATMs. However, while the ATM brings convenience, it also brings problems, such as customers forgetting articles. Banks usually install a camera above the ATM, which monitors for left-behind objects in addition to monitoring for financial crime. Panel carry-over detection and analysis is therefore one of the important components of an ATM security system; its main function is to judge, through a series of algorithms after the user leaves the ATM, whether an object left behind by the user exists in the panel area, and whether an illegally attached or mounted object exists there.
At present, existing ATM panel carry-over detection algorithms are implemented with traditional digital image analysis techniques: they compare the gray levels of the panel images before and after a pedestrian enters and leaves, and judge whether a carry-over object exists from the degree of gray-level change in the panel area. This method is simple and direct, but because it is sensitive to any change in the image area, it has the following problems: 1. environmental changes or other factors, such as indoor and outdoor conditions, changes in illumination intensity and direction, or customers entering and leaving irregularly, can trigger false carry-over alarms; 2. a carry-over alarm may also be triggered when an object "disappears" from the panel area.
Therefore, there is a need to provide a technical solution to solve the above-mentioned problem of detecting the panel carry-over of the ATM.
Disclosure of Invention
To overcome the above-mentioned deficiencies of the prior art, the present invention provides a method and a system for detecting the remaining objects in the panel area of an ATM, so as to accurately detect the remaining objects in the panel area even under the condition of environmental changes, such as disturbance of lighting conditions.
In order to achieve the above and other objects, the present invention provides a method for detecting a remnant in a panel area of an ATM, comprising the steps of:
step S1, acquiring a video stream of the ATM region, processing the acquired video stream by using a background modeling method, and detecting the entering or leaving of pedestrians in the video stream;
step S2, respectively intercepting a frame of image when the pedestrian entering and the pedestrian leaving are detected, carrying out difference processing on the pedestrian entering image and the pedestrian leaving image, and extracting the gray level change area of the image;
step S3, segmenting the pedestrian-leaving image with a trained convolutional neural network model, and extracting a suspected carry-over region;
in step S4, calculating the degree of overlap between the gray-level change region extracted in step S2 and the suspected carry-over region extracted in step S3 to determine whether a carry-over object exists on the panel.
Preferably, the step S1 further includes:
step S100, acquiring a video stream;
s101, constructing a background model by using the video stream acquired in the S100 and adopting a Gaussian background modeling method;
step S102, sequentially acquiring a frame of image from a video stream, and acquiring a moving foreground image by using the background model constructed in the step S101;
and step S103, confirming the entering or leaving of the pedestrian according to the obtained motion foreground image.
Preferably, in step S103, if the moving foreground appears in the pedestrian area, it is determined that a pedestrian enters, and if the moving foreground disappears, it is determined that the pedestrian leaves.
Preferably, the step S2 further includes:
step S200, respectively obtaining the gray value of each pixel point of the image where the pedestrian enters and the gray value of each pixel point of the image where the pedestrian leaves;
step S201, subtracting the gray value of each pixel of the pedestrian-entering image from the gray value of the corresponding pixel of the pedestrian-leaving image, and taking the absolute value of the difference to obtain a difference image.
Preferably, the step S3 further includes:
step S300, building a convolutional neural network model, inputting images with manually marked carry-over regions, automatically adjusting the model parameters, and training the model to generate a mathematical model capable of extracting the panel carry-over region;
and step S301, inputting the pedestrian-leaving image into the trained convolutional neural network model, segmenting the image, and extracting a suspected carry-over region.
Preferably, in step S4, if the overlapping degree of the gray-scale variation area and the suspected remaining area is greater than a preset threshold, it is determined that a remaining object exists on the panel.
In order to achieve the above object, the present invention further provides a system for detecting a remnant in a panel area of an ATM, including:
the background modeling unit is used for acquiring a video stream of an ATM region, processing the acquired video stream by using a background modeling method and detecting the entering or leaving of a pedestrian;
the image difference processing unit is used for respectively intercepting a frame of image when the pedestrian entering and the pedestrian leaving are detected, carrying out difference processing on the pedestrian entering image and the pedestrian leaving image and extracting a gray level change area of the image;
the deep learning unit is used for segmenting the pedestrian-leaving image with a trained convolutional neural network model and extracting a suspected carry-over region;
and the overlapping degree analysis unit is used for calculating the overlapping degree of the gray scale change area extracted by the image difference processing unit and the suspected remaining area extracted by the deep learning unit so as to determine whether a remaining object exists on the panel.
Preferably, the background modeling unit includes:
a video stream acquisition unit for acquiring a video stream;
the background model building unit is used for building a background model by using the video stream acquired by the video stream acquiring unit and adopting a Gaussian background modeling method;
the motion foreground acquiring unit is used for sequentially acquiring a frame of image from the video stream and acquiring a motion foreground image by using the background model constructed by the background model constructing unit;
and the state detection unit is used for confirming the entering or leaving of the pedestrian according to the obtained motion foreground image.
Preferably, the image difference processing unit further includes:
the gray value acquisition module is used for respectively acquiring the gray value of each pixel point of the image where the pedestrian enters and the gray value of each pixel point of the image where the pedestrian leaves;
and the difference processing module is used for subtracting the gray value of each pixel of the pedestrian-entering image from the gray value of the corresponding pixel of the pedestrian-leaving image and taking the absolute value of the difference to obtain a difference image, namely the corresponding gray-level change region.
Preferably, the deep learning unit further includes:
the convolutional neural network construction unit is used for constructing a convolutional neural network model, automatically adjusting model parameters after inputting an image manually marked with a left object region, and generating a mathematical model capable of extracting the panel left object region after training the model;
and the image segmentation unit is used for inputting the pedestrian-leaving image into the convolutional neural network model, segmenting the image, and extracting the suspected carry-over region.
Compared with the prior art, the method and system for detecting carry-over objects in the panel area of an ATM according to the present invention combine suspected carry-over extraction with gray-level change region comparison, and can therefore accurately detect carry-over objects in the panel area even under interference such as changing illumination.
Drawings
FIG. 1 is a flow chart of the steps of a method for detecting carryover in a panel area of an ATM;
FIG. 2 is a schematic diagram of a convolutional neural network model according to an embodiment of the present invention;
FIG. 3 is a system architecture diagram of the ATM panel area carry-over detection system according to the present invention;
FIG. 4 is a diagram illustrating a process of detecting the presence of an object left in a panel area of an ATM according to an embodiment of the present invention.
Detailed Description
Other advantages and capabilities of the present invention will be readily apparent to those skilled in the art from the present disclosure by describing the embodiments of the present invention with specific embodiments thereof in conjunction with the accompanying drawings. The invention is capable of other and different embodiments and its several details are capable of modification in various other respects, all without departing from the spirit and scope of the present invention.
FIG. 1 is a flow chart of the steps of a method for detecting the remains in the panel area of an ATM. As shown in fig. 1, the method for detecting the remnant of the panel area of the ATM includes the following steps:
and step S1, acquiring a video stream of the ATM region, processing the acquired image by using a background modeling method, and detecting the entering or leaving of the pedestrian.
In step S1, in the embodiment of the present invention, a gaussian background modeling method is used to detect a moving object in the obtained image, and if a moving foreground appears in a pedestrian area (which can be set), it is determined that a pedestrian enters, and if the foreground disappears, it is determined that the pedestrian leaves. In an embodiment of the present invention, the video stream is acquired by a camera disposed at the top of the ATM area, and specifically, the step S1 further includes:
step S100, acquiring a video stream;
s101, constructing a background model by using the video stream acquired in the S100 and adopting a Gaussian background modeling method;
step S102, sequentially acquiring a frame of image from a video stream, and acquiring a moving foreground image by using the background model constructed in step S101;
and step S103, confirming the entering or leaving of the pedestrian according to the obtained motion foreground image. And if the moving foreground appears in a pedestrian area (which can be set), judging that a pedestrian enters, and if the moving foreground disappears, judging that the pedestrian leaves.
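Steps S101 to S103 can be sketched with a per-pixel single-Gaussian background model. The patent specifies Gaussian background modeling but not its parameters, so the learning rate `alpha`, the deviation threshold `k`, and the coverage ratio `min_ratio` below are illustrative assumptions, not values from the invention:

```python
import numpy as np

class GaussianBackgroundModel:
    """Single-Gaussian-per-pixel background model (a simplified sketch of
    steps S101/S102; parameter values are illustrative assumptions)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 15.0 ** 2)  # initial variance
        self.alpha = alpha  # learning rate for background adaptation
        self.k = k          # foreground threshold, in standard deviations

    def apply(self, frame):
        """Return a boolean moving-foreground mask for one frame."""
        frame = frame.astype(np.float64)
        dist = np.abs(frame - self.mean)
        foreground = dist > self.k * np.sqrt(self.var)
        # Update the model only where the pixel still matches the background.
        bg = ~foreground
        self.mean[bg] += self.alpha * (frame[bg] - self.mean[bg])
        self.var[bg] += self.alpha * (dist[bg] ** 2 - self.var[bg])
        return foreground

def pedestrian_present(foreground_mask, roi, min_ratio=0.2):
    """Step S103 sketch: a pedestrian 'enters' when the moving foreground
    covers enough of the (configurable) pedestrian region of interest."""
    y0, y1, x0, x1 = roi
    return foreground_mask[y0:y1, x0:x1].mean() > min_ratio
```

A pedestrian-leaving event would then be detected when `pedestrian_present` flips back to `False` after having been `True`.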
And step S2, capturing one frame of image each when the pedestrian entering and the pedestrian leaving are detected, and performing difference processing on the pedestrian-entering image and the pedestrian-leaving image to extract the gray-level change region of the image. That is, the two frames (pedestrian entering and pedestrian leaving) are differenced to obtain a difference image, namely the gray-level change region. Specifically, step S2 further includes:
step S200, respectively obtaining the gray value of each pixel point of the image where the pedestrian enters and the gray value of each pixel point of the image where the pedestrian leaves;
step S201, subtracting the gray value of each pixel point of the image where the pedestrian leaves from the gray value of the corresponding pixel point of the image where the pedestrian enters, and taking the absolute value of the difference to obtain a difference image, that is, to obtain the corresponding gray-scale change region.
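Steps S200 and S201 amount to a per-pixel absolute difference. A minimal sketch follows; the patent only describes the absolute difference, so the binarization threshold used to turn the difference image into a change mask is an assumption:

```python
import numpy as np

def gray_change_region(enter_img, leave_img, thresh=30):
    """Steps S200-S201 sketch: per-pixel absolute gray-level difference
    between the pedestrian-entering and pedestrian-leaving frames.
    `thresh` is an illustrative assumption, not fixed by the patent."""
    # Widen the dtype so the subtraction cannot wrap around on uint8 input.
    diff = np.abs(leave_img.astype(np.int16) - enter_img.astype(np.int16))
    return diff > thresh  # boolean mask of the gray-level change region
```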
And step S3, segmenting the pedestrian-leaving image with the trained convolutional neural network model, and extracting a suspected carry-over region.
Specifically, step S3 further includes:
Step S300, a convolutional neural network model is built; images with manually marked carry-over regions are input, the model parameters are adjusted automatically, and the model is trained to generate a mathematical model capable of extracting the panel carry-over region. Fig. 2 is a schematic structural diagram of the convolutional neural network model built in the embodiment of the present invention. The original image is fed into a 14-layer MobileNet, which outputs a 1 × 1 × 512 feature map; this feature map is restored to 1/4 of the original image size by upsampling, concatenated along the channel dimension with the 4× and 8× feature maps output by the MobileNet, and the final segmentation result of the original image is produced by the subsequent convolution and upsampling layers. Since training a convolutional neural network is well established in the prior art, it is not described here in detail.
Step S301, the pedestrian-leaving image is input into the trained convolutional neural network model, the image is segmented, and a suspected carry-over region is extracted.
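The decoder fusion described in step S300 — upsampling the deep feature map and splicing it with shallower feature maps along the channel dimension — can be illustrated with a toy shape-flow sketch. The 32 × 32 input size and the channel counts of the 4× and 8× feature maps are made up for illustration; the real network uses MobileNet features and learned convolution layers:

```python
import numpy as np

def upsample_nn(x, factor):
    """Nearest-neighbour upsampling of an (H, W, C) feature map."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

rng = np.random.default_rng(0)
h = w = 32                                    # "original image" is 32x32 here
deep = rng.random((1, 1, 512))                # 1 x 1 x 512 encoder output
up = upsample_nn(deep, h // 4)                # restore to 1/4 of input size
feat4 = rng.random((h // 4, w // 4, 64))      # 4x-downsampled feature map
feat8 = upsample_nn(rng.random((h // 8, w // 8, 128)), 2)  # 8x map, matched to 1/4 size
fused = np.concatenate([up, feat4, feat8], axis=-1)  # splice along channels
```

In the actual model, `fused` would then pass through further convolution and upsampling layers to produce a full-resolution segmentation mask.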
Step S4, calculating the degree of overlap between the gray-level change region extracted in step S2 and the suspected carry-over region extracted in step S3 to determine whether a carry-over object exists on the panel; if the overlap of the two regions is large (for example, greater than a preset threshold), it is determined that a carry-over object exists on the panel, and an alarm is pushed.
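Step S4 can be sketched as follows. The patent does not fix the exact overlap formula or threshold, so the intersection-over-suspected-area measure and the 0.5 threshold below are assumptions (intersection-over-union would be another reasonable choice):

```python
import numpy as np

def overlap_degree(change_mask, suspect_mask):
    """Step S4 sketch: ratio of the intersection to the suspected carry-over
    region. The exact formula is an assumption, not fixed by the patent."""
    inter = np.logical_and(change_mask, suspect_mask).sum()
    area = suspect_mask.sum()
    return inter / area if area else 0.0

def carry_over_alarm(change_mask, suspect_mask, thresh=0.5):
    """Push an alarm when the two regions overlap strongly enough."""
    return overlap_degree(change_mask, suspect_mask) > thresh
```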
Fig. 3 is a system architecture diagram of the system for detecting carry-over objects in the panel area of an ATM according to the present invention. As shown in fig. 3, the system includes:
the background modeling unit 201 is configured to obtain a video stream of the ATM region, process the obtained video stream by using a background modeling method, and detect entering or leaving of a pedestrian.
In the embodiment of the present invention, the background modeling unit 201 detects moving objects in the images of the video stream by using a Gaussian background modeling method; it determines that a pedestrian enters if a moving foreground appears in the pedestrian area (which can be set), and that the pedestrian leaves if the foreground disappears. Specifically, the background modeling unit 201 further includes:
a video stream acquisition unit for acquiring a video stream;
the background model building unit is used for building a background model by using the video stream acquired by the video stream acquiring unit and adopting a Gaussian background modeling method;
the motion foreground acquiring unit is used for sequentially acquiring a frame of image from the video stream and acquiring a motion foreground image by using the background model constructed by the background model constructing unit;
and the state detection unit is used for confirming the entering or leaving of the pedestrian according to the obtained motion foreground image. And if the moving foreground appears in a pedestrian area (which can be set), judging that a pedestrian enters, and if the moving foreground disappears, judging that the pedestrian leaves.
The image difference processing unit 202 is configured to capture one frame of image each when a pedestrian enters and when a pedestrian leaves, perform difference processing on the pedestrian-entering image and the pedestrian-leaving image, and extract the gray-level change region of the image. That is, the two frames are differenced to obtain a difference image, namely the gray-level change region. Specifically, the image difference processing unit 202 further includes:
the gray value acquisition module is used for respectively acquiring the gray value of each pixel point of the image where the pedestrian enters and the gray value of each pixel point of the image where the pedestrian leaves;
and the difference processing module is used for subtracting the gray value of each pixel of the pedestrian-entering image from the gray value of the corresponding pixel of the pedestrian-leaving image and taking the absolute value of the difference to obtain a difference image, namely the corresponding gray-level change region.
The deep learning unit 203 is configured to segment the pedestrian-leaving image with a trained convolutional neural network model and extract a suspected carry-over region.
Specifically, the deep learning unit 203 further includes:
and the convolutional neural network construction unit is used for constructing a convolutional neural network model, automatically adjusting model parameters after inputting the image manually marked with the left object region, and generating a mathematical model capable of extracting the panel left object region after training the model. In the embodiment of the present invention, the convolutional neural network model is shown in fig. 2, which is not repeated herein.
And the image segmentation unit is used for inputting the pedestrian-leaving image into the convolutional neural network model, segmenting the image, and extracting the suspected carry-over region.
The overlap degree analysis unit 204 is configured to calculate the degree of overlap between the gray-level change region extracted by the image difference processing unit 202 and the suspected carry-over region extracted by the deep learning unit 203 to determine whether a carry-over object exists on the panel; if the overlap of the two regions is large, it determines that a carry-over object exists on the panel and pushes an alarm.
FIG. 4 is a diagram illustrating a process of detecting the presence of an object left in a panel area of an ATM according to an embodiment of the present invention.
1. First, a deep neural network model is constructed; field-collected, manually annotated image samples of ATM carry-over objects are gathered, and the deep neural network model is trained until it can distinguish the panel area, illumination areas, and carry-over regions in an image;
2. After images are obtained from the ATM panel camera, the entering and leaving of pedestrians are detected by the background modeling method, and one frame of image is captured at each event. The two frames are sent to the image difference processing unit (the solid-line flow in FIG. 3), which differences the before and after images (pedestrian entering and pedestrian leaving) and extracts the gray-level change region of the image, which is regarded as a possible carry-over region. Meanwhile, the image after the pedestrian leaves is segmented by the trained deep neural network model (the dotted-line flow in FIG. 3), and a suspected carry-over region is extracted.
3. The degree of overlap between the suspected carry-over region and the gray-level change region of the panel is analyzed; if the overlap is large, it is determined that a carry-over object exists on the panel, and an alarm is pushed.
In summary, by combining suspected carry-over extraction with gray-level change region comparison, the method and system for detecting carry-over objects in the panel area of an ATM according to the present invention can accurately detect carry-over objects in the panel area even under interference such as changing illumination.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Modifications and variations can be made to the above-described embodiments by those skilled in the art without departing from the spirit and scope of the present invention. Therefore, the scope of the invention should be determined from the following claims.

Claims (10)

1. A method for detecting the remnants in the panel area of an ATM comprises the following steps:
step S1, acquiring a video stream of the ATM region, processing the acquired video stream by using a background modeling method, and detecting the entering or leaving of pedestrians in the video stream;
step S2, respectively intercepting a frame of image when the pedestrian entering and the pedestrian leaving are detected, carrying out difference processing on the pedestrian entering image and the pedestrian leaving image, and extracting the gray level change area of the image;
step S3, segmenting the pedestrian-leaving image with a trained convolutional neural network model, and extracting a suspected carry-over region;
in step S4, calculating the degree of overlap between the gray-level change region extracted in step S2 and the suspected carry-over region extracted in step S3 to determine whether a carry-over object exists on the panel.
2. The ATM panel area carry-over detection method according to claim 1, wherein the step S1 further includes:
step S100, acquiring a video stream;
step S101, constructing a background model by using the video stream acquired in the step S100 and adopting a Gaussian background modeling method;
step S102, sequentially acquiring a frame of image from a video stream, and acquiring a moving foreground image by using the background model constructed in the step S101;
and step S103, confirming the entering or leaving of the pedestrian according to the obtained motion foreground image.
3. An ATM panel area carryover detection method according to claim 2, wherein: in step S103, it is determined that a pedestrian enters if the moving foreground appears in the pedestrian area, and it is determined that the pedestrian leaves if the moving foreground disappears.
4. The ATM panel area carry-over detection method according to claim 1, wherein the step S2 further includes:
step S200, respectively obtaining the gray value of each pixel point of the image where the pedestrian enters and the gray value of each pixel point of the image where the pedestrian leaves;
step S201, subtracting the gray value of each pixel of the pedestrian-entering image from the gray value of the corresponding pixel of the pedestrian-leaving image, and taking the absolute value of the difference to obtain a difference image.
5. The ATM panel area carry-over detection method according to claim 1, wherein the step S3 further includes:
step S300, building a convolutional neural network model, inputting images with manually marked carry-over regions, automatically adjusting the model parameters, and training the model to generate a mathematical model capable of extracting the panel carry-over region;
and step S301, inputting the pedestrian-leaving image into the convolutional neural network model, segmenting the image, and extracting a suspected carry-over region.
6. The ATM panel area carryover detection method of claim 1, wherein: in step S4, if the overlapping degree of the gray-scale variation area and the suspected remaining area is greater than the predetermined threshold, it is determined that a remaining object exists on the panel.
7. An ATM panel area carryover detection system comprising:
the background modeling unit is used for acquiring a video stream of an ATM region, processing the acquired video stream by using a background modeling method and detecting the entering or leaving of a pedestrian;
the image difference processing unit is used for respectively intercepting a frame of image when the pedestrian entering and the pedestrian leaving are detected, carrying out difference processing on the pedestrian entering image and the pedestrian leaving image and extracting a gray level change area of the image;
the deep learning unit is used for segmenting the pedestrian-leaving image with a trained convolutional neural network model and extracting a suspected carry-over region;
and the overlapping degree analysis unit is used for calculating the overlapping degree of the gray scale change area extracted by the image difference processing unit and the suspected remaining area extracted by the deep learning unit so as to determine whether a remaining object exists on the panel.
8. The ATM panel zone carryover detection system of claim 7, wherein the background modeling unit comprises:
a video stream acquisition unit for acquiring a video stream;
the background model building unit is used for building a background model by using the video stream acquired by the video stream acquiring unit and adopting a Gaussian background modeling method;
the motion foreground acquiring unit is used for sequentially acquiring a frame of image from the video stream and acquiring a motion foreground image by using the background model constructed by the background model constructing unit;
and the state detection unit is used for confirming the entering or leaving of the pedestrian according to the obtained motion foreground image.
9. The ATM panel area carryover detection system of claim 7, wherein the image differencing processing unit further comprises:
the gray value acquisition module is used for respectively acquiring the gray value of each pixel point of the image where the pedestrian enters and the gray value of each pixel point of the image where the pedestrian leaves;
and the difference processing module is used for subtracting the gray value of each pixel of the pedestrian-entering image from the gray value of the corresponding pixel of the pedestrian-leaving image and taking the absolute value of the difference to obtain a difference image, namely the corresponding gray-level change region.
10. The ATM panel zone carryover detection system of claim 7, wherein the deep learning unit further comprises:
the convolutional neural network construction unit is used for constructing a convolutional neural network model, automatically adjusting the model parameters as images manually annotated with carryover regions are fed in, and, once trained, producing a mathematical model capable of extracting panel carryover regions;
and the image segmentation unit is used for inputting the image captured when the pedestrian leaves into the convolutional neural network model, segmenting the image, and extracting the suspected carryover areas.
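The inference step of claim 10 has the shape: convolve → activation → threshold → binary mask. The patent does not specify the network architecture, so the sketch below uses a toy single-convolution "model" in plain NumPy purely to illustrate that shape; the kernel, bias, and threshold are assumed values standing in for trained parameters.

```python
import numpy as np

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2-D cross-correlation (a conv layer as DL frameworks compute it)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def segment_suspected_regions(leave_img: np.ndarray, kernel: np.ndarray,
                              bias: float = 0.0, thresh: float = 0.5) -> np.ndarray:
    """Toy one-layer 'model': conv -> sigmoid -> threshold -> binary region mask."""
    x = leave_img.astype(np.float64) / 255.0   # normalize to [0, 1]
    logits = conv2d(x, kernel) + bias
    prob = 1.0 / (1.0 + np.exp(-logits))       # sigmoid activation
    return (prob > thresh).astype(np.uint8)
```

A real implementation would be a trained multi-layer segmentation network; the point here is only that the output is a per-pixel mask ready for the overlap analysis of claim 7.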
CN201811319006.XA 2018-11-07 2018-11-07 Method and system for detecting remnants in panel area of ATM (automatic Teller machine) Active CN109409315B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811319006.XA CN109409315B (en) 2018-11-07 2018-11-07 Method and system for detecting remnants in panel area of ATM (automatic Teller machine)

Publications (2)

Publication Number Publication Date
CN109409315A CN109409315A (en) 2019-03-01
CN109409315B true CN109409315B (en) 2022-01-11

Family

ID=65472088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811319006.XA Active CN109409315B (en) 2018-11-07 2018-11-07 Method and system for detecting remnants in panel area of ATM (automatic Teller machine)

Country Status (1)

Country Link
CN (1) CN109409315B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110399792B (en) * 2019-06-19 2023-07-25 河南北斗卫星导航平台有限公司 Monitoring method, monitoring device and computer readable storage medium
CN110956102A (en) * 2019-11-19 2020-04-03 上海眼控科技股份有限公司 Bank counter monitoring method and device, computer equipment and storage medium
CN111091097A (en) * 2019-12-20 2020-05-01 ***通信集团江苏有限公司 Method, device, equipment and storage medium for identifying remnants
CN111259955B (en) * 2020-01-15 2023-12-08 国家测绘产品质量检验测试中心 Reliable quality inspection method and system for geographical national condition monitoring result
CN111932525B (en) * 2020-08-19 2024-02-23 中国银行股份有限公司 Method and device for detecting left-over real object of real object delivery port of bank equipment
CN112837326B (en) * 2021-01-27 2024-04-09 南京中兴力维软件有限公司 Method, device and equipment for detecting carryover
CN117036482B (en) * 2023-08-22 2024-06-14 北京智芯微电子科技有限公司 Target object positioning method, device, shooting equipment, chip, equipment and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101426128A (en) * 2007-10-30 2009-05-06 三星电子株式会社 Detection system and method for stolen and lost packet
CN102034240A (en) * 2010-12-23 2011-04-27 北京邮电大学 Method for detecting and tracking static foreground
CN102413321A (en) * 2011-12-26 2012-04-11 浙江省电力公司 Automatic image-recording system and method
CN103679123A (en) * 2012-09-17 2014-03-26 浙江大华技术股份有限公司 Method and system for detecting remnant on ATM (Automatic Teller Machine) panel
CN104156942A (en) * 2014-07-02 2014-11-19 华南理工大学 Detection method for remnants in complex environment
CN105678803A (en) * 2015-12-29 2016-06-15 南京理工大学 Video monitoring target detection method based on W4 algorithm and frame difference
CN106296677A (en) * 2016-08-03 2017-01-04 浙江理工大学 A kind of remnant object detection method of double mask context updates based on double-background model
CN106372576A (en) * 2016-08-23 2017-02-01 南京邮电大学 Deep learning-based intelligent indoor intrusion detection method and system
CN106469311A (en) * 2015-08-19 2017-03-01 南京新索奇科技有限公司 Object detection method and device

Similar Documents

Publication Publication Date Title
CN109409315B (en) Method and system for detecting remnants in panel area of ATM (automatic Teller machine)
KR100831122B1 (en) Face authentication apparatus, face authentication method, and entrance and exit management apparatus
CN107679471B (en) Indoor personnel air post detection method based on video monitoring platform
CN111553265B (en) Method and system for detecting internal defects of drainage pipeline
JP4533836B2 (en) Fluctuating region detection apparatus and method
WO2015085811A1 (en) Method and device for banknote identification based on thickness signal identification
CN108804987B (en) Door opening and closing state detection method and device and people flow detection system
CN110589647A (en) Method for real-time fault detection and prediction of elevator door through monitoring
US10599946B2 (en) System and method for detecting change using ontology based saliency
KR20120129301A (en) Method and apparatus for extracting and tracking moving objects
JP2003051076A (en) Device for monitoring intrusion
Santos et al. Car recognition based on back lights and rear view features
CN111626104B (en) Cable hidden trouble point detection method and device based on unmanned aerial vehicle infrared thermal image
WO2012081969A1 (en) A system and method to detect intrusion event
CN115661475A (en) Image foreign matter identification method, device, equipment and storage medium
CN111368726B (en) Construction site operation face personnel number statistics method, system, storage medium and device
CN111191575B (en) Naked flame detection method and system based on flame jumping modeling
EP0832472A1 (en) Security control system
JPH05300516A (en) Animation processor
CN114694090A (en) Campus abnormal behavior detection method based on improved PBAS algorithm and YOLOv5
Mantini et al. A signal detection theory approach for camera tamper detection
KR100338473B1 (en) Face detection method using multi-dimensional neural network and device for the same
CN116416565A (en) Method and system for detecting pedestrian trailing and crossing in specific area
CN113247720A (en) Intelligent elevator control method and system based on video
CN106250859B (en) The video flame detecting method spent in a jumble is moved based on characteristic vector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant