CN112541410A - Method and device for detecting national treasury personnel behavior specifications - Google Patents

Method and device for detecting national treasury personnel behavior specifications

Info

Publication number
CN112541410A
CN112541410A (application CN202011380286.2A)
Authority
CN
China
Prior art keywords
video stream
stream data
live
area
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011380286.2A
Other languages
Chinese (zh)
Other versions
CN112541410B (en)
Inventor
高伟
张磊
郭锐鹏
钟春彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202011380286.2A priority Critical patent/CN112541410B/en
Publication of CN112541410A publication Critical patent/CN112541410A/en
Application granted granted Critical
Publication of CN112541410B publication Critical patent/CN112541410B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02 Banking, e.g. interest calculation or account maintenance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for detecting behavior specifications of vault personnel, which can be used in the financial field or in other technical fields. The method comprises the following steps: calling the corresponding live-action video stream data according to the acquired to-be-detected area of the vault and the configured camera stream address; determining a core area in the live-action video stream data according to the acquired behavior detection name; intercepting the video stream data of the core area and comparing it with pre-stored data information to obtain a comparison result; and judging whether a violation behavior exists according to the comparison result. The method and the system obtain real-time pictures from the cameras in all areas of the vault, access the camera streams at locally deployed computing nodes for processing, detect non-compliant behaviors in the vault by a computer vision method, and give an early warning in time when a non-compliant behavior occurs, thereby assisting the bank in supervision and effectively preventing business risks.

Description

Method and device for detecting national treasury personnel behavior specifications
Technical Field
The application belongs to the technical field of video processing, and particularly relates to a behavior specification detection method and device for a vault worker.
Background
The vault is the place where a bank keeps, transfers in and allocates out foreign currency, cash, precious metals, valuable documents, blank important certificates and other valuables, and where cash business is operated in a centralized manner. At present most vault operations are performed manually, and staff are in frequent contact with the physical items, so the possibility of moral risk is high. In order to improve the level of risk management and control, vault managers have formulated corresponding post operation specifications, but whether these specifications are strictly executed is mainly checked by relying on the self-discipline of the staff and on managers manually reviewing historical videos. This checking mode involves a large workload, is inefficient and detects problems with a delay, and therefore cannot meet the requirements of bank vault operations under the new situation.
Disclosure of Invention
The application provides a behavior specification detection method and a behavior specification detection device, which are used for at least solving the current problem that the operation specifications of vault workers lack supervision.
According to an aspect of the present application, there is provided a behavior specification detection method, including:
calling corresponding live-action video stream data according to the acquired to-be-detected area of the vault and the configured camera stream address;
determining a core area in live-action video stream data according to the acquired behavior detection name;
intercepting video stream data of the core area and comparing the video stream data with prestored data information to obtain a comparison result;
and judging whether the violation behavior exists according to the comparison result.
In an embodiment, calling corresponding live-action video stream data according to the acquired to-be-detected area of the vault and the configured camera stream address includes:
determining a target camera of a to-be-detected area of the vault;
searching a stream address corresponding to a target camera from a preset camera stream address;
and calling live-action video stream data shot by the target camera according to the stream address corresponding to the target camera.
In one embodiment, when the behavior detection name is violation entry detection, determining a core area in live-action video stream data according to the obtained behavior detection name includes:
decoding live-action video stream data;
and determining a face position area in the decoded live-action video stream data, and taking the face position area as a core area.
In one embodiment, when the behavior detection name is point action detection, determining a core area in live-action video stream data according to the acquired behavior detection name includes:
decoding live-action video stream data;
and determining a hand position area in the decoded live-action video stream data, and taking the hand position area as a core area.
In an embodiment, capturing video stream data of a core area and comparing the captured video stream data with pre-stored data information to obtain a comparison result includes:
intercepting video stream data of a hand position area;
counting the number of point actions in the video stream data of the hand position area;
and comparing the counted number of point actions with a preset value to obtain a comparison result.
In one embodiment, when the behavior detection name is dressing behavior detection, determining a core area in live-action video stream data according to the acquired behavior detection name includes:
decoding live-action video stream data;
and determining a human trunk position area in the decoded live-action video stream data, and taking the human trunk position area as a core area.
In an embodiment, capturing video stream data of a core area and comparing the captured video stream data with pre-stored data information to obtain a comparison result includes:
intercepting video stream data of a human trunk position area;
and detecting dressing information in the video stream data of the human trunk position area, and comparing the dressing information with prestored dressing image information to obtain a comparison result.
According to another aspect of the present application, there is also provided a behavior specification detecting apparatus, including:
the video stream data calling unit is used for calling corresponding live-action video stream data according to the acquired to-be-detected area of the vault and the configured camera stream address;
the core area locking unit is used for determining a core area in live-action video stream data according to the acquired behavior detection name;
the comparison unit is used for intercepting the video stream data of the core area and comparing the video stream data with prestored data information to obtain a comparison result;
and the behavior judging unit is used for judging whether the violation behavior exists according to the comparison result.
In one embodiment, the video stream data retrieving unit includes:
the target camera determining module is used for determining a target camera of a to-be-detected area of the vault;
the stream address searching module is used for searching a stream address corresponding to a target camera from the preset camera stream addresses;
and the video stream data acquisition module is used for calling the live-action video stream data shot by the target camera according to the stream address corresponding to the target camera.
In one embodiment, when the behavior detection name is violation entry detection, the core region locking unit includes:
the first decoding module is used for decoding live-action video stream data;
and the face recognition module is used for determining a face position area in the decoded live-action video stream data and taking the face position area as a core area.
In one embodiment, when the behavior detection name is point action detection, the core region locking unit includes:
the second decoding module is used for decoding the live-action video stream data;
and the hand motion recognition module is used for determining a hand position area in the decoded live-action video stream data and taking the hand position area as a core area.
In one embodiment, the alignment unit includes:
the video intercepting module is used for intercepting video stream data of the hand position area;
the motion frequency counting module is used for counting the number of points in the video stream data of the hand position area;
and the frequency comparison module is used for comparing the counted number of the point actions with a preset value to obtain a comparison result.
In one embodiment, when the behavior detection name is dressing behavior detection, the core region locking unit includes:
the third decoding module is used for decoding the live-action video stream data;
and the trunk locking module is used for determining a human trunk position area in the decoded live-action video stream data and taking the human trunk position area as a core area.
In one embodiment, the alignment unit includes:
the intercepting module is used for intercepting video stream data of the human trunk position area;
and the dressing comparison module is used for detecting dressing information in the video stream data of the human trunk position area and comparing the dressing information with prestored dressing image information to obtain a comparison result.
According to the method and the system, real-time pictures are obtained from the cameras in all areas of the vault, the camera streams are accessed at locally deployed computing nodes for processing, the non-compliant behaviors in the vault are detected by a computer vision method, an early warning is given in time when a non-compliant behavior occurs, the bank is assisted in supervision, and business risks are effectively prevented.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a behavior specification detection method provided in the present application.
Fig. 2 is a flowchart of acquiring live-action video stream data according to an embodiment of the present application.
Fig. 3 is a flowchart of determining a core area in a first case in the embodiment of the present application.
Fig. 4 is a flowchart of determining a core area in a second case in the embodiment of the present application.
Fig. 5 is a flowchart of determining a core area in a third case in the embodiment of the present application.
Fig. 6 is a block diagram of a behavior specification detection apparatus according to the present application.
Fig. 7 is a block diagram of a video stream data retrieving unit in the embodiment of the present application.
Fig. 8 is a block diagram of a core region locking unit in a first case in the embodiment of the present application.
Fig. 9 is a block diagram of a core region locking unit in the second case in the embodiment of the present application.
Fig. 10 is a block diagram of a core region locking unit in a third case in the embodiment of the present application.
FIG. 11 is a general block diagram of a system for detecting behavior specifications of vault workers according to an embodiment of the present application.
Fig. 12 is a specific implementation of an electronic device in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. The embodiments of the present invention may be used in the financial field, and may also be used in any technical field other than the financial field, and the present application is not limited thereto.
The vault is an important place in a banking system. Most vault operations are currently performed manually, workers are in frequent contact with cash and valuables, and the possibility of moral risk is high. In order to supervise the behavior specifications of the workers, the application provides a behavior specification detection method which, as shown in fig. 1, comprises the following steps (a skeleton of the overall flow is sketched after the list of steps):
S101: And calling corresponding live-action video stream data according to the acquired to-be-detected area of the vault and the configured camera stream address.
S102: and determining a core area in the live-action video stream data according to the acquired behavior detection name.
S103: and intercepting the video stream data of the core area and comparing the video stream data with the pre-stored data information to obtain a comparison result.
S104: and judging whether the violation behavior exists according to the comparison result.
First, face information is registered for the workers who are authorized to enter the vault to work, where each piece of face information corresponds to the vault areas that the worker is authorized to enter; for example, the face information of Zhang San is submitted and the vault areas that Zhang San may enter are registered. The whole vault is divided into a plurality of areas such as A, B and C; if area A in the vault is to be detected, area A is the to-be-detected area of the vault. The RTSP stream addresses of the cameras in all areas are configured in the system in advance by the vault manager; the camera stream address corresponding to area A is obtained (different cameras monitor different areas), and the live-action video stream data of area A is then called. A core area in the live-action video stream data is determined according to the acquired behavior detection name, and the video stream data of the core area is intercepted and compared with the pre-stored data information to obtain a comparison result.
In an embodiment, calling corresponding live-action video stream data according to the acquired to-be-detected region of the vault and the configured camera stream address, as shown in fig. 2, includes:
S201: And determining a target camera of the to-be-detected area of the vault.
S202: and searching a stream address corresponding to the target camera from the preset camera stream addresses.
S203: and calling live-action video stream data shot by the target camera according to the stream address corresponding to the target camera.
In a specific embodiment, still taking area A in the vault as an example, area A is the to-be-detected area of the vault. The target camera 1 of area A is determined; camera 1 continuously shoots dynamic video in area A, and the stream address corresponding to camera 1 is searched from the pre-configured camera stream addresses. After the stream address corresponding to camera 1 is found, the live-action video stream data of area A shot by camera 1 is called according to that stream address, as sketched below.
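As one possible concrete reading of this flow, the following sketch pulls frames from the configured RTSP address with OpenCV. The CAMERA_STREAMS map and its addresses are illustrative assumptions, not values given by the patent.

```python
import cv2  # OpenCV; its RTSP support is used here purely as an illustration

# Hypothetical pre-configured stream addresses per vault area (illustrative values).
CAMERA_STREAMS = {
    "A": "rtsp://192.168.1.11:554/ch1",
    "B": "rtsp://192.168.1.12:554/ch1",
}

def read_live_stream(area_id: str):
    """Yield decoded frames from the camera covering the to-be-detected area."""
    # S201/S202: target camera of the area and its configured stream address.
    cap = cv2.VideoCapture(CAMERA_STREAMS[area_id])
    try:
        # S203: call the live-action video stream and decode it frame by frame.
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```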
In an embodiment, when the behavior detection name is violation entry detection, determining a core area in live-action video stream data according to the obtained behavior detection name, as shown in fig. 3, includes:
S301: And decoding the live-action video stream data.
S302: and determining a face position area in the decoded live-action video stream data, and taking the face position area as a core area.
In a specific embodiment, if it is required to detect whether someone has illegally entered a given area of the vault, the live-action video stream data of the to-be-detected area of the vault is first obtained according to the flow in fig. 2. After the live-action video stream data is decoded, the face in each frame of the video stream is located, i.e. the face is locked, and the face is then recognized. The name, work number and corresponding authorized areas stored in the background for that face are checked, and the authorized areas are compared with the current to-be-detected area of the vault, so as to judge whether the person has the authority to enter the current area; a sketch of this check is given below.
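A minimal sketch of this authority check follows, assuming the open-source face_recognition package as one possible face recognition backend and a hypothetical REGISTRY of registered staff; the patent itself does not prescribe a specific library or data structure.

```python
import cv2
import face_recognition  # one possible open-source backend; not prescribed by the patent

# Hypothetical registry filled at face-registration time: name -> encoding + authorized areas.
REGISTRY = {
    "Zhang San": {"encoding": None, "areas": {"cash", "clearing"}},
}

def check_entry(frame_bgr, current_area: str):
    """Return (name, allowed) for every face found in one frame of the area camera."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    locations = face_recognition.face_locations(rgb)          # lock the face position areas
    encodings = face_recognition.face_encodings(rgb, locations)
    known = [(name, d) for name, d in REGISTRY.items() if d["encoding"] is not None]
    results = []
    for enc in encodings:
        matches = face_recognition.compare_faces([d["encoding"] for _, d in known], enc)
        hit = next(((name, d) for (name, d), m in zip(known, matches) if m), None)
        if hit is None:
            results.append(("unknown", False))                # unregistered face: violation
        else:
            name, d = hit
            results.append((name, current_area in d["areas"]))  # authority vs. current area
    return results
```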
In an embodiment, when the behavior detection name is point action detection, determining a core area in live-action video stream data according to the obtained behavior detection name, as shown in fig. 4, includes:
S401: And decoding the live-action video stream data.
S402: and determining a hand position area in the decoded live-action video stream data, and taking the hand position area as a core area.
In a specific embodiment, if the counting actions of the workers in the vault are to be monitored, the live-action video stream data of the to-be-detected area of the vault are firstly acquired according to the flow in fig. 2, and after the live-action video stream data are decoded, the hands of the workers in each frame of picture in the video stream are locked.
In an embodiment, capturing video stream data of a core area and comparing the captured video stream data with pre-stored data information to obtain a comparison result includes:
intercepting video stream data of a hand position area;
counting the number of point actions in the video stream data of the hand position area;
and comparing the counted number of point actions with a preset value to obtain a comparison result.
In a specific embodiment, a video stream of the hands of the workers in the vault is acquired. Each time a counting (point) action of the hands is detected, a counter in the system is increased by 1. When the handover period ends, the value of the counting-action counter is compared with a set threshold value; for example, if the value of the counter is greater than the threshold value, the worker is suspected of taking extra valuables. When the value of the counter is not consistent with the threshold value, early warning information is sent to the related personnel and the video is stored. A sketch of this counting and comparison is given below.
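The counting-and-threshold logic can be summarised as follows; is_point_action is a hypothetical classifier (for example the trained 3D convolutional model sketched later in this document), and segmentation of the hand-region stream into clips is assumed to happen upstream.

```python
from typing import Any, Callable, Iterable, Tuple

def check_point_actions(hand_clips: Iterable[Any],
                        is_point_action: Callable[[Any], bool],
                        expected_count: int) -> Tuple[int, bool]:
    """Count counting (point) actions over a handover period and compare with the preset value."""
    counter = 0
    for clip in hand_clips:              # intercepted video stream data of the hand position area
        if is_point_action(clip):        # e.g. the trained 3D-CNN sketched further below
            counter += 1                 # counter + 1 for each detected counting action
    alert = counter != expected_count    # any mismatch with the preset value triggers a warning
    return counter, alert
```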
In one embodiment, when the behavior detection name is dressing behavior detection, determining a core area in live-action video stream data according to the obtained behavior detection name, as shown in fig. 5, includes:
S501: And decoding the live-action video stream data.
S502: and determining a human trunk position area in the decoded live-action video stream data, and taking the human trunk position area as a core area.
In a specific embodiment, if the dressing of the staff in the vault is to be supervised, the live-action video stream data of the to-be-detected area of the vault is first acquired according to the flow in fig. 2, and after the live-action video stream data is decoded, the human trunk part in each frame of the video is locked.
In an embodiment, capturing video stream data of a core area and comparing the captured video stream data with pre-stored data information to obtain a comparison result includes:
intercepting video stream data of a human trunk position area;
and detecting dressing information in the video stream data of the human trunk position area, and comparing the dressing information with prestored dressing image information to obtain a comparison result.
In a specific embodiment, the video image data of the human trunk part is acquired and then compared with the pre-stored standard dressing image data. If the two are inconsistent, early warning information is sent to the related personnel, so that they can perform the corresponding processing in time after receiving it. A sketch of one possible comparison is given below.
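As an illustrative stand-in for the dressing comparison, the sketch below compares colour histograms of the intercepted torso crop and the pre-stored standard dressing image with OpenCV. The patent does not specify a comparison metric, so this is only one simple possibility under that assumption.

```python
import cv2
import numpy as np

def dressing_matches(torso_bgr: np.ndarray,
                     reference_bgr: np.ndarray,
                     threshold: float = 0.7) -> bool:
    """Compare the intercepted torso crop with the pre-stored standard dressing image."""
    def hsv_hist(img: np.ndarray) -> np.ndarray:
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()
    # Correlation of 1.0 means identical colour distributions; below the
    # threshold the dressing is treated as non-compliant and a warning is raised.
    score = cv2.compareHist(hsv_hist(torso_bgr), hsv_hist(reference_bgr), cv2.HISTCMP_CORREL)
    return score >= threshold
```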
Based on the same inventive concept, the embodiment of the present application further provides a behavior specification detection apparatus, which can be used to implement the method described in the above embodiment, as described in the following embodiment. Because the principle of solving the problem of the behavior specification detection device is similar to that of the behavior specification detection method, the implementation of the behavior specification detection device can refer to the implementation of the behavior specification detection method, and repeated parts are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. While the system described in the embodiments below is preferably implemented in software, implementations in hardware, or a combination of software and hardware are also possible and contemplated.
As shown in fig. 6, the present application provides a behavior specification detecting apparatus, including:
the video stream data calling unit 601 is configured to call corresponding live-action video stream data according to the acquired to-be-detected region of the vault and the configured camera stream address;
a core area locking unit 602, configured to determine a core area in live-action video stream data according to the obtained behavior detection name;
a comparing unit 603, configured to intercept video stream data in the core area and compare the video stream data with pre-stored data information to obtain a comparison result;
and a behavior determining unit 604, configured to determine whether there is an illegal behavior according to the comparison result.
In one embodiment, as shown in fig. 7, the video stream data retrieving unit 601 includes:
a target camera determination module 701, configured to determine a target camera of a to-be-detected region of a vault;
a stream address searching module 702, configured to search a stream address corresponding to a target camera from pre-configured camera stream addresses;
the video stream data obtaining module 703 is configured to call live-action video stream data shot by the target camera according to the stream address corresponding to the target camera.
In one embodiment, when the behavior detection name is a violation entry detection, as shown in fig. 8, the core region locking unit 602 includes:
a first decoding module 801, configured to decode live-action video stream data;
the face recognition module 802 is configured to determine a face position area in the decoded live-action video stream data, and use the face position area as a core area.
In one embodiment, when the action detection name is point action detection, as shown in fig. 9, the core region locking unit 602 includes:
a second decoding module 901, configured to decode live-action video stream data;
a hand motion recognition module 902, configured to determine a hand position area in the decoded live-action video stream data, and use the hand position area as a core area.
In one embodiment, the alignment unit includes:
the video intercepting module is used for intercepting video stream data of the hand position area;
the motion frequency counting module is used for counting the number of points in the video stream data of the hand position area;
and the frequency comparison module is used for comparing the counted number of the point actions with a preset value to obtain a comparison result.
In one embodiment, when the behavior detection name is dressing behavior detection, as shown in fig. 10, the core region locking unit 602 includes:
a third decoding module 1001, configured to decode live-action video stream data;
the trunk locking module 1002 is configured to determine a human trunk position area in the decoded live-action video stream data, and use the human trunk position area as a core area.
In one embodiment, the alignment unit includes:
the intercepting module is used for intercepting video stream data of the human trunk position area;
and the dressing comparison module is used for detecting dressing information in the video stream data of the human trunk position area and comparing the dressing information with prestored dressing image information to obtain a comparison result.
In the practical application process, the present application further provides a system for detecting the behavior specifications of vault workers, a general block diagram of which is shown in fig. 11. The system comprises: the cameras 11 of all areas of the vault, a detection unit 12, a main control unit 13, a video stream parsing unit 14, an illegal entry detection unit 15, a multi-person detection unit 16, a bag detection unit 17, a work clothes detection unit 18, a point action detection unit 19, a face recognition unit 20, a notification display unit 21, a parameter setting unit 22 and a storage unit 23. Specifically:
the cameras 11 in all the areas of the vault are used for monitoring daily working behaviors in the vault without dead angles so as to prevent internal moral risks.
The detection unit 12 is used for performing detection on the video being recorded, and notifying the corresponding personnel in time to perform the related processing when a non-compliance point occurs.
The main control unit 13 is used for controlling the operation of the real-time detection unit and interacting with the cameras 11 of each area of the vault, the face recognition unit 20, the notification display unit 21, the parameter setting unit 22 and the storage unit 23.
The video stream parsing unit 14 is used for receiving the video streams of the cameras 11 in each area of the vault and performing a decoding operation on them.
The illegal entry detection unit 15 is used for capturing the faces of people entering each area of the vault, calling the face recognition service of the face recognition unit 20 through the main control unit 13 to obtain the areas each person is authorized to enter (for example Zhang San: the cash area and the clearing area), comparing whether the area where the person appears in the real-time video matches an authorized area, and giving an early warning in time when they do not match. The face snapshot can be realized by using the EfficientDet algorithm.
The multi-person detection unit 16 is used for detecting the number of people on duty in areas where the vault rules specify a required headcount, and giving an early warning in time when the detected number of people is inconsistent with the specified number. Detection algorithms include, but are not limited to, the YOLO series; a sketch of such a headcount check is given below.
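A hedged sketch of the headcount check, assuming a COCO-pretrained YOLO model from the ultralytics package as one member of the "YOLO series" (any equivalent person detector would do):

```python
from ultralytics import YOLO  # assumption: one packaging of the "YOLO series"

_model = YOLO("yolov8n.pt")   # COCO-pretrained; class id 0 is "person"

def count_people(frame) -> int:
    """Return the number of persons detected in one frame of the area camera."""
    result = _model(frame, verbose=False)[0]
    return int((result.boxes.cls == 0).sum())

def headcount_alert(frame, required: int) -> bool:
    """True when the on-duty headcount differs from the number the rules require."""
    return count_people(frame) != required
```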
The bag detection unit 17 is used for detecting whether a person carries a bag in the passageway area (vault management rules generally forbid carrying bags when entering or leaving the core area) and immediately giving an early warning when a bag appears. Detection algorithms include, but are not limited to, Faster R-CNN, SSD, etc.; a sketch is given below.
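One possible realisation with a COCO-pretrained Faster R-CNN from torchvision is sketched below; the bag-like COCO label ids (backpack, handbag, suitcase) are assumptions about the pretrained label map, not part of the patent.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

_BAG_LABELS = {27, 31, 33}  # backpack / handbag / suitcase in the assumed COCO label map
_detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def carries_bag(frame_rgb, score_threshold: float = 0.6) -> bool:
    """True when a bag-like object is detected in a passageway-area frame (HxWx3, RGB, uint8)."""
    tensor = torch.from_numpy(frame_rgb).permute(2, 0, 1).float() / 255.0
    prediction = _detector([tensor])[0]
    for label, score in zip(prediction["labels"].tolist(), prediction["scores"].tolist()):
        if label in _BAG_LABELS and score >= score_threshold:
            return True
    return False
```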
The work clothes detection unit 18 is used for detecting whether the workers in the cash clearing zone wear the prescribed work clothes (the cash clearing zone is mainly used for clearing cash, and the general vault management rules require that workers entering this zone wear the corresponding work clothes, whose key feature is that they have no pockets). Detection algorithms include, but are not limited to, RFBNet, RefineDet, etc. Because the work clothes are uniform, the algorithm can be retrained on labeled historical video data so as to distinguish whether the work clothes are worn; a sketch of such a classifier is given below.
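A minimal sketch of such a retrained classifier, assuming a binary "uniform / not uniform" formulation on labelled torso crops with a fine-tuned ResNet-18; the patent names RFBNet and RefineDet, so this substitution is for illustration only.

```python
import torch.nn as nn
from torchvision import models

def build_uniform_classifier() -> nn.Module:
    """Binary classifier over torso crops: work clothes worn / not worn."""
    model = models.resnet18(weights="DEFAULT")       # generic backbone, fine-tuned on labelled crops
    model.fc = nn.Linear(model.fc.in_features, 2)    # two classes
    return model

def train_step(model, crops, labels, optimizer,
               criterion=nn.CrossEntropyLoss()):
    """One optimisation step on a batch of labelled torso crops (N, 3, H, W)."""
    optimizer.zero_grad()
    loss = criterion(model(crops), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```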
The point action detection unit 19 is used for detecting, in the handover zone, whether a worker performs a counting (point) action when bank staff hand over cash boxes with the escort personnel. The detection algorithm can adopt a 3D convolutional network or a two-stream network, obtained by training on historical counting-action videos: since there are several handover sessions in a day, the historical counting-action videos of the handover sessions are labeled, the labeled videos are used as training samples to train the learning model, and the trained model then recognizes whether a counting action occurs during the handover; a sketch of such a model is given below.
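The sketch below shows a tiny 3D convolutional classifier of the kind referred to here, taking a short clip of the hand region and outputting counting-action / no-counting-action logits; the architecture and layer sizes are illustrative, not the trained model of the patent.

```python
import torch.nn as nn

class PointActionNet(nn.Module):
    """Tiny 3D-convolutional classifier: a clip of the hand region, shaped
    (N, 3, T, H, W), is mapped to counting-action / no-counting-action logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, clip):
        return self.classifier(self.features(clip).flatten(1))

# Training would label historical handover clips and minimise cross-entropy
# over (clip, label) batches, as with any binary video classifier.
```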
The face recognition unit 20 is used for performing management operations such as face registration, comparison and deletion for bank workers; the face recognition algorithm may adopt open-source OpenFace, FaceNet, etc.
The notification display unit 21 is used for notifying the vault manager in time through a message terminal (including a smart wristband, a web terminal, a tablet, etc.) when a non-compliant behavior occurs, and the vault manager performs the corresponding processing after receiving the message.
The parameter setting unit 22 is used for setting the parameters of the non-compliance points, including the area division, the camera stream addresses and the like, where the detection area can be defined as a polygon drawn with OpenCV; the detection items of each area are as follows (a hypothetical configuration sketch is given after the table):
[Table: detection items configured per vault area; see the corresponding figure of the original publication]
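A hypothetical version of this per-area configuration, with the detection region expressed as an OpenCV polygon; all stream addresses, area names and coordinates below are made up for illustration.

```python
import cv2
import numpy as np

# Hypothetical per-area configuration: stream address, detection items and an
# OpenCV polygon delimiting the detection region (all values made up).
AREA_CONFIG = {
    "passageway": {
        "stream": "rtsp://192.168.1.13:554/ch1",
        "detections": ["illegal_entry", "bag"],
        "roi": np.array([[100, 80], [600, 80], [600, 420], [100, 420]], dtype=np.int32),
    },
    "handover": {
        "stream": "rtsp://192.168.1.14:554/ch1",
        "detections": ["multi_person", "point_action"],
        "roi": np.array([[50, 50], [500, 50], [500, 400], [50, 400]], dtype=np.int32),
    },
}

def inside_detection_area(area: str, point_xy) -> bool:
    """True when a detected object centre falls inside the configured polygon."""
    roi = AREA_CONFIG[area]["roi"]
    return cv2.pointPolygonTest(roi, tuple(map(float, point_xy)), False) >= 0
```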
The storage unit 23 is used for storing the early warning information and the videos corresponding to the detection points, and for keeping photos of the corresponding moments in other cases, which facilitates follow-up checking and verification and improves video review efficiency.
The method and the system have the advantages that real-time pictures are obtained from the cameras in all areas of the vault, the camera streams are connected to local computing nodes for processing, the non-compliant behaviors of workers in the vault are detected by a computer vision method, an early warning is given to the related personnel in time when a non-compliant behavior occurs, the bank is assisted in supervision, and business risks are effectively prevented.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and the implementation mode of the invention are explained by applying specific embodiments in the invention, and the description of the embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
An embodiment of the present application further provides a specific implementation manner of an electronic device capable of implementing all steps in the method in the foregoing embodiment, and referring to fig. 12, the electronic device specifically includes the following contents:
a processor 1201, a memory 1202, a communication interface 1203, a bus 1204, and a non-volatile memory 1205;
the processor 1201, the memory 1202 and the communication interface 1203 complete mutual communication through the bus 1204;
the processor 1201 is configured to call the computer programs in the memory 1202 and the non-volatile memory 1205; when the processor executes the computer programs, it implements all the steps of the method in the foregoing embodiments, for example the following steps:
S101: And calling corresponding live-action video stream data according to the acquired to-be-detected area of the vault and the configured camera stream address.
S102: and determining a core area in the live-action video stream data according to the acquired behavior detection name.
S103: and intercepting the video stream data of the core area and comparing the video stream data with the pre-stored data information to obtain a comparison result.
S104: and judging whether the violation behavior exists according to the comparison result.
Embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements all the steps of the method in the above embodiments, for example the following steps:
S101: And calling corresponding live-action video stream data according to the acquired to-be-detected area of the vault and the configured camera stream address.
S102: and determining a core area in the live-action video stream data according to the acquired behavior detection name.
S103: and intercepting the video stream data of the core area and comparing the video stream data with the pre-stored data information to obtain a comparison result.
S104: and judging whether the violation behavior exists according to the comparison result.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is simple, and the relevant points can be referred to the partial description of the method embodiment.

Although embodiments of the present description provide method steps as described in embodiments or flowcharts, more or fewer steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or end product executes, it may execute sequentially or in parallel (e.g., parallel processors or multi-threaded environments, or even distributed data processing environments) according to the method shown in the embodiment or the figures.

The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded.

For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, in implementing the embodiments of the present description, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a combination of multiple sub-modules or sub-units, and the like.

The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions.
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein. The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of an embodiment of the specification.
In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction. The above description is only an example of the embodiments of the present disclosure, and is not intended to limit the embodiments of the present disclosure. Various modifications and variations to the embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the embodiments of the present specification should be included in the scope of the claims of the embodiments of the present specification.

Claims (16)

1. A method for detecting the behavior specification of a vault worker is characterized by comprising the following steps:
calling corresponding live-action video stream data according to the acquired to-be-detected area of the vault and the configured camera stream address;
determining a core area in the live-action video stream data according to the acquired behavior detection name;
intercepting the video stream data of the core area and comparing the video stream data with prestored data information to obtain a comparison result;
and judging whether the violation behavior exists according to the comparison result.
2. The behavior specification detection method according to claim 1, wherein the invoking of the corresponding live-action video stream data according to the obtained to-be-detected region of the vault and the configured camera stream address comprises:
determining a target camera of the to-be-detected area of the vault;
searching a stream address corresponding to the target camera from a preset camera stream address;
and calling live-action video stream data shot by the target camera according to the stream address corresponding to the target camera.
3. The behavior specification detection method according to claim 2, wherein when the behavior detection name is violation entry detection, the determining a core area in the live-action video stream data according to the obtained behavior detection name includes:
decoding the live-action video stream data;
and determining a face position area in the decoded live-action video stream data, and taking the face position area as the core area.
4. The behavior specification detection method according to claim 2, wherein when the behavior detection name is a point action detection, the determining a core area in the live-action video stream data according to the obtained behavior detection name includes:
decoding the live-action video stream data;
determining a hand position area in the decoded live-action video stream data, and taking the hand position area as the core area.
5. The behavior specification detection method according to claim 4, wherein the data information comprises a preset value of the number of point actions, and the intercepting the video stream data of the core area and comparing the video stream data with the prestored data information to obtain a comparison result comprises:
intercepting video stream data of the hand position area;
counting the number of point actions in the video stream data of the hand position area;
and comparing the counted number of point actions with the preset value of the number of point actions to obtain the comparison result.
6. The behavior specification detection method according to claim 2, wherein when the behavior detection name is a dressing behavior detection, the determining a core area in the live-action video stream data according to the obtained behavior detection name includes:
decoding the live-action video stream data;
and determining a human trunk position area in the decoded live-action video stream data, and taking the human trunk position area as a core area.
7. The behavior specification detection method according to claim 6, wherein the data information comprises dressing image information, and the intercepting the video stream data of the core area and comparing the video stream data with the prestored data information to obtain a comparison result comprises:
intercepting video stream data of the human trunk position area;
and detecting the dressing information in the video stream data of the human trunk position area, and comparing the dressing information with pre-stored dressing image information to obtain a comparison result.
8. A device for detecting the behavior specification of a vault worker, characterized by comprising:
the video stream data calling unit is used for calling corresponding live-action video stream data according to the acquired to-be-detected area of the vault and the configured camera stream address;
the core area locking unit is used for determining a core area in the live-action video stream data according to the acquired behavior detection name;
the comparison unit is used for intercepting the video stream data of the core area and comparing the video stream data with prestored data information to obtain a comparison result;
and the behavior judging unit is used for judging whether the violation behavior exists according to the comparison result.
9. The behavior specification detection apparatus according to claim 8, wherein the video stream data call unit comprises:
the target camera determining module is used for determining a target camera of the to-be-detected area of the vault;
the stream address searching module is used for searching a stream address corresponding to the target camera from a preset camera stream address;
and the video stream data acquisition module is used for calling the live-action video stream data shot by the target camera according to the stream address corresponding to the target camera.
10. The apparatus according to claim 9, wherein when the behavior detection name is violation entry detection, the core region locking unit includes:
the first decoding module is used for decoding the live-action video stream data;
and the face recognition module is used for determining a face position area in the decoded live-action video stream data and taking the face position area as the core area.
11. The apparatus according to claim 9, wherein when the behavior detection name is point action detection, the core region locking unit includes:
the second decoding module is used for decoding the live-action video stream data;
and the hand motion recognition module is used for determining a hand position area in the decoded live-action video stream data and taking the hand position area as the core area.
12. The behavior specification detection device according to claim 11, wherein the data information comprises a preset value of the number of point actions, and the comparison unit comprises:
the video intercepting module is used for intercepting video stream data of the hand position area;
the motion frequency counting module is used for counting the number of points in the video stream data of the hand position area;
and the frequency comparison module is used for comparing the counted number of the point actions with a preset value to obtain a comparison result.
13. The behavior specification detecting apparatus according to claim 9, wherein when the behavior detection name is dressing behavior detection, the core region locking unit includes:
the third decoding module is used for decoding the live-action video stream data;
and the trunk locking module is used for determining a human trunk position area in the decoded live-action video stream data and taking the human trunk position area as a core area.
14. The behavior specification detection device according to claim 13, wherein the data information comprises dressing image information, and the comparison unit comprises:
the intercepting module is used for intercepting video stream data of the human trunk position area;
and the dressing comparison module is used for detecting the dressing information in the video stream data of the human trunk position area and comparing the dressing information with prestored dressing image information to obtain a comparison result.
15. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the behavior specification detection method of any one of claims 1 to 7 when executing the program.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the behavior specification detection method according to any one of claims 1 to 7.
CN202011380286.2A 2020-12-01 2020-12-01 Method and device for detecting national treasury personnel behavior specification Active CN112541410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011380286.2A CN112541410B (en) 2020-12-01 2020-12-01 Method and device for detecting national treasury personnel behavior specification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011380286.2A CN112541410B (en) 2020-12-01 2020-12-01 Method and device for detecting national treasury personnel behavior specification

Publications (2)

Publication Number Publication Date
CN112541410A true CN112541410A (en) 2021-03-23
CN112541410B CN112541410B (en) 2024-03-26

Family

ID=75016756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011380286.2A Active CN112541410B (en) 2020-12-01 2020-12-01 Method and device for detecting national treasury personnel behavior specification

Country Status (1)

Country Link
CN (1) CN112541410B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210338A (en) * 2019-05-17 2019-09-06 广东履安实业有限公司 The dressing information of a kind of pair of target person carries out the method and system of detection identification
CN110351535A (en) * 2019-08-14 2019-10-18 杭州品茗安控信息技术股份有限公司 A kind of building site monitoring system
CN110472870A (en) * 2019-08-15 2019-11-19 成都睿晓科技有限公司 A kind of cashier service regulation detection system based on artificial intelligence
CN110719441A (en) * 2019-09-30 2020-01-21 傅程宏 System and method for bank personnel behavior compliance early warning management
CN110738178A (en) * 2019-10-18 2020-01-31 思百达物联网科技(北京)有限公司 Garden construction safety detection method and device, computer equipment and storage medium
CN111783530A (en) * 2020-05-26 2020-10-16 武汉盛元鑫博软件有限公司 Safety system and method for monitoring and identifying behaviors in restricted area

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449592A (en) * 2021-05-18 2021-09-28 浙江大华技术股份有限公司 Escort task detection method, escort task detection system, electronic device and storage medium

Also Published As

Publication number Publication date
CN112541410B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
CN109299135B (en) Abnormal query recognition method, recognition equipment and medium based on recognition model
JP6732806B2 (en) Account theft risk identification method, identification device, and prevention/control system
CN109670441A (en) A kind of realization safety cap wearing knows method for distinguishing, system, terminal and computer readable storage medium
CN106850346B (en) Method and device for monitoring node change and assisting in identifying blacklist and electronic equipment
CN110866820A (en) Real-time monitoring system, method, equipment and storage medium for banking business
CN105518709A (en) Method, system and computer program product for identifying human face
CN109710780A (en) A kind of archiving method and device
CN113095132B (en) Neural network based gas field identification method, system, terminal and storage medium
CN111898486B (en) Monitoring picture abnormality detection method, device and storage medium
CN109446936A (en) A kind of personal identification method and device for monitoring scene
CN108932456A (en) Face identification method, device and system and storage medium
CN110991231B (en) Living body detection method and device, server and face recognition equipment
CN112464030B (en) Suspicious person determination method and suspicious person determination device
CN111666915A (en) Monitoring method, device, equipment and storage medium
CN110516656A (en) Video monitoring method, device, computer equipment and readable storage medium storing program for executing
WO2020167155A1 (en) Method and system for detecting troubling events during interaction with a self-service device
CN112541410A (en) Method and device for detecting national treasury personnel behavior specifications
CN112750038B (en) Transaction risk determination method, device and server
CN111310612A (en) Behavior supervision method and device
CN112541661A (en) Method and device for detecting personnel behavior and environmental specification of network points
CN107301373B (en) Data processing method, device and storage medium
CN111091047B (en) Living body detection method and device, server and face recognition equipment
KR20200059643A (en) ATM security system based on image analyses and the method thereof
CN105427480A (en) Teller machine based on image analysis
CN108446819A (en) One kind being used for garden personal management trust evaluation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant