CN113158730A - Multi-person on-duty identification method and device based on human shape identification, electronic device and storage medium - Google Patents

Multi-person on-duty identification method and device based on human shape identification, electronic device and storage medium

Info

Publication number
CN113158730A
CN113158730A
Authority
CN
China
Prior art keywords
human
duty
shape
identification
shaped
Prior art date
Legal status
Pending
Application number
CN202011625056.8A
Other languages
Chinese (zh)
Inventor
梁昆
何牡禄
张轩铭
王利强
Current Assignee
Hangzhou Tpson Technology Co ltd
Original Assignee
Hangzhou Tpson Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Tpson Technology Co ltd filed Critical Hangzhou Tpson Technology Co ltd
Priority to CN202011625056.8A
Publication of CN113158730A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/10Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a multi-person on-duty identification method based on human shape identification, which comprises the following steps: setting system configuration items, including defining the identified area range, the number of identified people and the shortest off-duty time of the identified people; identifying the human shape by using a human shape target detection algorithm in the region range, and sampling the human shape to obtain human-shaped coordinates; comparing the human-shaped coordinates with the human-shaped coordinates obtained in the history, and calculating the mean square error of the two to obtain variance data; judging whether the human figure moves according to the variance data, and if the human figure does not move within the preset time, indicating that the human figure falls asleep; and judging whether the human figure leaves the region range according to the variance data, and if the human figure leaves the region range, indicating that the human figure is off duty. Through the method and the device, the problem in the related art that identifying whether personnel have left their posts depends on manual work is solved, and off-duty identification of personnel based on human shape identification is realized.

Description

Multi-person on-duty identification method and device based on human shape identification, electronic device and storage medium
Technical Field
The present application relates to the field of computer vision, and in particular, to a method, an apparatus, an electronic apparatus, and a storage medium for identifying a plurality of people who are off duty based on human shape recognition.
Background
Along with the development of production capacity, fire safety is becoming more and more important in social production. To ensure fire safety, most enterprises currently arrange fire-control managers to keep watch in a fire-fighting control room. However, if the control room is left unattended, or if the operators on duty leave their posts or neglect their duties, fire safety hazards may go undetected and lead to fire disasters. Ensuring that personnel remain on duty is therefore of great significance for the reliable operation and effective management of the fire-fighting control room. At present, key fire-fighting duty rooms and monitoring rooms impose very strict requirements on staff presence: multiple duty positions must be staffed around the clock, so it is necessary to ensure that several people are on duty at all times, to prevent sleeping on duty and individual absences, and to rectify such situations promptly once they are found. The existing monitoring method mainly consists of a camera recording video that is uploaded to managers, who then perform manual spot checks. The invention therefore proposes a multi-person off-duty algorithm based on image processing, namely a multi-person on-duty identification method based on human shape identification.
At present, no effective solution has been proposed in the related art for the problem that monitoring whether multiple people have left their posts depends on manual work.
Disclosure of Invention
The embodiments of the application provide a method, a device, an electronic device and a storage medium for identifying multiple people who are off duty based on human shape identification, so as to at least solve the problem in the related art that such off-duty identification depends on manual work.
In a first aspect, an embodiment of the present application provides a method for identifying multiple people who are off duty based on human shape identification, including:
step 1: setting system configuration items, including defining the identified area range, the number of identified people and the shortest off duty time of the identified people;
step 2: recognizing a human shape in the region range by using a human shape target detection algorithm, and sampling the human shape to obtain a human shape coordinate;
step 3: comparing the human-shaped coordinates with the human-shaped coordinates obtained in history, and calculating the mean square error of the two to obtain variance data;
step 4: judging whether the human figure moves according to the variance data, and if the human figure does not move within the preset time, indicating that the human figure falls asleep;
step 5: judging whether the human figure leaves the region range according to the variance data, and if the human figure leaves the region range, indicating that the human figure is off duty.
In one embodiment, the shortest off-duty time of a person is set as t, and the human-shaped coordinates are sampled at an interval derived from t (the sampling-interval formula is given only as an image in the original publication), yielding human-shaped coordinates at times t1, t2, ….
In one embodiment, identifying a human shape in the area range by using a human shape target detection algorithm and sampling the human shape to obtain human-shaped coordinates includes the following steps:
step 1: recognizing the human shape in the region by utilizing a human shape target detection algorithm;
step 2: carrying out segmentation processing and binarization processing on the human figure;
step 3: extracting the human-shaped key point characteristics and the human-shaped coordinate rectangular frame, and filtering interference factors, wherein the interference factors comprise articles and pets;
step 4: sampling the human shape to obtain human-shaped coordinates.
In one embodiment, the human-shaped target detection algorithm is a DPM target detection algorithm, and human-shaped recognition is performed on input after a gradient model of a human body is obtained by calculating a gradient direction histogram and training a neural network.
In one embodiment, after step 5, the method further comprises: and sending the judgment result and the video data to the cloud.
In a second aspect, an embodiment of the present application provides a device for identifying multiple people who are off duty based on human shape identification, including:
the system comprises a presetting module, a monitoring module and a monitoring module, wherein the presetting module is used for setting system configuration items, including the definition of an identified area range, the number of identified people and the shortest off-duty time of identified people;
the identification module is used for identifying the human shape in the area range by utilizing a human shape target detection algorithm and sampling the human shape to obtain a human shape coordinate;
the calculation module is used for comparing the human-shaped coordinates with the human-shaped coordinates obtained in the history, and calculating the mean square error of the human-shaped coordinates and the human-shaped coordinates obtained in the history to obtain a group of variance data;
the judging module is used for judging whether the human figure moves according to the variance data, and if the human figure does not move within the preset time, the human figure is asleep; and judging whether the human figure leaves the region range or not according to the variance data, and if the human figure leaves the region range, indicating that the human figure is off duty.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor executes the computer program to implement the method for identifying multiple people who are off duty based on human shape identification according to the first aspect.
In a fourth aspect, the present application provides a storage medium, in which a computer program is stored, where the computer program is configured to execute the method for identifying multiple persons who are away from work based on human shape identification according to the first aspect when running.
Compared with the prior art, the multi-person off-duty identification method based on human shape identification provided by the embodiments of the application obtains the position coordinates of each point of the human shape through recognition and calculates the corresponding displacement. This solves the problem that multi-person off-duty identification in the prior art depends on manual work, and realizes computer-based intelligent identification of off-duty situations with real-time uploading to the cloud.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flowchart of a method for identifying a plurality of people who are off duty based on human shape identification according to an embodiment of the application;
FIG. 2 is a flowchart of a method for identifying the person on duty in a fire-fighting attendant room based on an intelligent algorithm according to a preferred embodiment of the present application;
FIG. 3 is a block diagram of an off-duty human recognition apparatus based on human shape recognition according to an embodiment of the present application;
fig. 4 is a schematic diagram of a hardware structure of a multi-person off duty recognition device based on human shape recognition according to an embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words in this application do not denote a limitation of quantity and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship between associated objects and covers three cases; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The terms "first," "second," "third," and the like are used herein merely to distinguish similar objects and do not denote a particular ordering.
The embodiment provides a method for identifying multiple people who are off duty based on human shape identification. Fig. 1 is a flowchart of a method for identifying multiple people who are off duty based on human shape identification according to an embodiment of the present application, and as shown in fig. 1, the flowchart includes the following steps:
step S101: and setting system configuration items, including defining the range of the identified area, the number of the identified people and the shortest off duty time of the identified people.
In this embodiment, the system configuration item is a system performance parameter set autonomously before the monitoring and detecting system is installed, and the standards of the on-duty personnel, such as the range of motion and the off-duty time, are adjusted according to the system performance parameter. The standard can be set and modified by a manager independently, has flexibility and can be changed according to actual conditions.
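For illustration only, the configuration items described above could be grouped as in the following Python sketch; the field names and example values are assumptions made for the sketch and are not specified in this application.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DutyConfig:
    """System configuration items of step S101 (field names are illustrative)."""
    region: List[Tuple[int, int]]   # polygon vertices of the monitored area, in pixels
    required_people: int            # number of people that must be on duty
    min_off_duty_time: float        # shortest off-duty time t, in seconds

# Example: a rectangular duty area, two attendants, a 10-minute off-duty threshold
config = DutyConfig(
    region=[(0, 0), (1280, 0), (1280, 720), (0, 720)],
    required_people=2,
    min_off_duty_time=600.0,
)
```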
Step S102: and recognizing the human shape by using a human shape target detection algorithm in the region range, and sampling the human shape to obtain a human shape coordinate.
Step S103: and comparing the human-shaped coordinates with the human-shaped coordinates acquired in history, and calculating the mean square error of the human-shaped coordinates and the human-shaped coordinates acquired in history to obtain variance data.
Step S104: and judging whether the human figure moves according to the variance data, and if the human figure does not move within the preset time, indicating that the human figure falls asleep.
Step S105: and judging whether the human figure leaves the region range or not according to the variance data, and if the human figure leaves the region range, indicating that the human figure is off duty.
In the embodiment, the video stream is acquired through the camera, then the video stream is preprocessed and recognized, the key features of the human figure are extracted, whether the human figure falls asleep or leaves the post is judged according to the historical recognition message, and the human figure is sent to the cloud so that a manager can check the human figure.
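As a minimal sketch of the judging logic in steps S103 to S105, the following Python code compares sampled coordinates with historical samples and classifies the person's state; the function names, the movement threshold and the convention that a missing detection is represented by None are assumptions rather than part of the application.

```python
import numpy as np

def coordinate_variance(current: np.ndarray, previous: np.ndarray) -> float:
    """Mean squared deviation between two samples of human-shape coordinates,
    each given as an (N, 2) array of keypoints (same N assumed for both)."""
    return float(np.mean((current - previous) ** 2))

def judge_state(samples: list, move_threshold: float,
                preset_time: float, sample_interval: float) -> str:
    """Classify a tracked person from a chronological list of coordinate samples.

    Mirrors steps S103-S105: near-zero variance over the preset time suggests
    sleeping, and a missing detection suggests the person has left the region.
    The thresholds and the 'None means not detected' convention are assumptions.
    """
    if samples[-1] is None:                      # no human shape found in the region
        return "off_duty"
    window = max(2, int(preset_time / sample_interval))
    recent = [s for s in samples[-window:] if s is not None]
    if len(recent) >= 2:
        variances = [coordinate_variance(recent[i + 1], recent[i])
                     for i in range(len(recent) - 1)]
        if max(variances) < move_threshold:      # virtually no movement in the window
            return "asleep"
    return "on_duty"
```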
In one embodiment, the shortest off-duty time of a person is set as t, and the human-shaped coordinates are sampled at an interval derived from t (the sampling-interval and sampling-frequency formulas are given only as images in the original publication), yielding human-shaped coordinates at times t1, t2, …. Once the shortest off-duty time of the personnel is determined, the sampling interval and sampling frequency can be determined from it. The higher the sampling frequency, the larger the sample size and the more accurate the acquired data, but the greater the data transmission and processing pressure. Taking the shortest off-duty time as the reference, the formula shown in the original figure is therefore selected as the optimal sampling frequency, which reduces the sampling frequency while still guaranteeing sufficient sample size and thus optimizes the system configuration.
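The sampling-interval formula itself appears only as an image in the original text; the sketch below therefore parameterises the interval as a fraction of the shortest off-duty time t, with 0.5 as an assumed placeholder value rather than the formula actually claimed.

```python
def sampling_times(min_off_duty_time: float, duration: float,
                   interval_fraction: float = 0.5) -> list:
    """Return sampling instants t1, t2, ... over `duration` seconds.

    The interval is taken as a fraction of the shortest off-duty time t;
    the value 0.5 is an assumed stand-in for the formula shown as an image
    in the original text.
    """
    interval = min_off_duty_time * interval_fraction
    times, t_i = [], interval
    while t_i <= duration:
        times.append(t_i)
        t_i += interval
    return times

# Example: a 600 s off-duty threshold sampled over one hour
print(sampling_times(600.0, 3600.0))  # [300.0, 600.0, ..., 3600.0]
```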
In one embodiment, the human-shaped target detection algorithm is a DPM target detection algorithm, and human-shaped recognition is performed on input after a gradient model of a human body is obtained by calculating a gradient direction histogram and using neural network training. The DPM algorithm is a detection method based on components, and has strong robustness on deformation of a target. The DPM algorithm adopts an improved HOG characteristic, an SVM classifier and a Sliding window (Sliding Windows) detection idea, adopts a multi-Component (Component) strategy aiming at the multi-view problem of the target, and adopts a Component model strategy based on a graph Structure (Pictorial Structure) aiming at the deformation problem of the target. Further, the model type to which the sample belongs, the position of the component model, and the like are automatically determined by Multiple-instance Learning (Latent Variable). The detection flow formula of the DPM target detection algorithm is as follows:
score(x0, y0, l0) = R_{0,l0}(x0, y0) + Σ_{i=1}^{n} D_{i,l0−λ}(2·(x0, y0) + v_i) + b

is the detection score of the model anchored at (x0, y0) on scale level l0 (the formula appears only as an image in the original publication; the form above is the standard DPM score it describes). Since the same target contains multiple components and the detection scores of the different component models need to be aligned, an offset coefficient b has to be set. Since the resolution of the part model is twice that of the root model, the part model is matched at scale level l0 − λ, and the coordinates of the anchor point therefore also need to be remapped to that scale level.
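A minimal numpy sketch of the score aggregation expressed by the formula above is shown below; it assumes the root-filter response map and the distance-transformed part responses have already been computed, which is the expensive part of DPM that is not reproduced here.

```python
import numpy as np

def dpm_score(root_response: np.ndarray, part_responses: list,
              anchors: list, offset_b: float, x0: int, y0: int) -> float:
    """Aggregate the DPM detection score for a root location (x0, y0) on level l0.

    root_response  : 2-D response map R_{0,l0} of the root filter (indexed [y, x]).
    part_responses : list of distance-transformed part response maps D_{i,l0-lambda},
                     computed at twice the root resolution.
    anchors        : list of anchor offsets v_i = (vx, vy), in part-level pixels.
    offset_b       : component offset coefficient b.
    All inputs are assumed to be precomputed elsewhere; only the summation of the
    formula above is shown here.
    """
    score = root_response[y0, x0] + offset_b
    for part_map, (vx, vy) in zip(part_responses, anchors):
        px, py = 2 * x0 + vx, 2 * y0 + vy        # remap the anchor to the part level
        score += part_map[py, px]
    return float(score)
```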
In one embodiment, after step 5, the method further comprises: and sending the judgment result and the video data to the cloud. And after receiving the judgment result and the video data, the cloud end pushes the judgment result and the video data to a manager to prompt the manager to process the work of sleeping and leaving the post of the staff.
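The application only states that the judgment result and the video data are sent to the cloud; the following sketch shows one possible upload over HTTP, where the endpoint URL and field names are hypothetical.

```python
import requests

def push_to_cloud(result: str, video_path: str,
                  endpoint: str = "https://example.com/api/duty-events"):
    """Upload the judgment result and the associated video clip.

    The endpoint URL and field names are hypothetical; the patent only states
    that the result and video data are sent to the cloud for managers to review.
    """
    with open(video_path, "rb") as f:
        response = requests.post(
            endpoint,
            data={"result": result},   # e.g. "asleep" or "off_duty"
            files={"video": f},
            timeout=10,
        )
    response.raise_for_status()
    return response
```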
The embodiments of the present application are described and illustrated below by means of preferred embodiments.
Fig. 2 is a flowchart of a fire-fighting attendant recognizing method based on an intelligent algorithm according to a preferred embodiment of the present application, and as shown in fig. 2, the flowchart includes the following steps:
step S201: and setting system configuration items, including defining the range of the identified area, the number of the identified people and the shortest off duty time of the identified people.
Step S202: and recognizing the human figure by utilizing a human figure target detection algorithm in the area range.
Step S203: the human figure is subjected to segmentation processing and binarization processing.
Step S204: and (4) extracting the human-shaped key point characteristics and the human-shaped coordinate rectangular frame, and filtering interference factors.
Step S205: and sampling the human shape to obtain a human shape coordinate.
Step S206: and comparing the human-shaped coordinates with the human-shaped coordinates acquired in history, and calculating the mean square error of the human-shaped coordinates and the human-shaped coordinates acquired in history to obtain variance data.
Step S207: and judging whether the human figure moves according to the variance data, and if the human figure does not move within the preset time, indicating that the human figure falls asleep.
Step S208: and judging whether the human figure leaves the region range or not according to the variance data, and if the human figure leaves the region range, indicating that the human figure is off duty.
In this embodiment, the human shape is segmented by image thresholding, which is simple to implement, computationally light and stable. Binarization then sets the gray value of each pixel to 0 or 255, so that the whole image presents an obvious black-and-white effect. Binarizing the image makes it possible to determine the human shape quickly, greatly reduces the data volume in the image and highlights the outline of the target.
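A short OpenCV sketch of the segmentation and binarization described above (step S203) follows; Otsu thresholding is used here as one common choice of threshold, since the application only specifies thresholding with a 0/255 binary output.

```python
import cv2

def binarize_human_region(frame):
    """Threshold a frame so that the human silhouette stands out (step S203).

    Otsu's method is used as one common thresholding choice; the application
    only specifies image thresholding followed by 0/255 binarization.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)          # light noise filtering
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```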
Fig. 3 is a block diagram of a fire-fighting attendant recognition device based on an intelligent algorithm according to an embodiment of the present application, and as shown in fig. 3, the device includes: the preset module 31 is used for setting system configuration items, including defining the identified area range, the number of identified people and the shortest off duty time of identified people; the identification module 32 is used for identifying the human shape in the region range by utilizing a human shape target detection algorithm, and sampling the human shape to obtain a human shape coordinate; the calculation module 33 is configured to compare the human-shaped coordinates with the human-shaped coordinates obtained in the history, and calculate a mean square error between the human-shaped coordinates and the human-shaped coordinates obtained in the history to obtain a set of variance data; the judging module 34 is configured to judge whether the figure moves according to the variance data, and if the figure does not move within a preset time, it is determined that the figure falls asleep; and judging whether the human figure leaves the region range or not according to the variance data, and if the human figure leaves the region range, indicating that the human figure is off duty.
The embodiment also provides a device for identifying the presence of a plurality of people who leave the post based on human shape identification, which is used for realizing the above embodiments and preferred embodiments, and the description of the device is omitted. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware for a predetermined function.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
In addition, the method for identifying multiple people on duty based on human shape identification described in connection with fig. 1 can be implemented by a multi-person on-duty identification device based on human shape identification. Fig. 4 is a schematic diagram of the hardware structure of such a device according to an embodiment of the application.
The multi-person on-duty identification device based on human shape identification may include a processor 41 and a memory 42 storing computer program instructions.
Specifically, the processor 41 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The memory 42 may include mass storage for data or instructions. By way of example, and not limitation, the memory 42 may include a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 42 may include removable or non-removable (or fixed) media, where appropriate. The memory 42 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 42 is a non-volatile memory. In particular embodiments, the memory 42 includes Read-Only Memory (ROM) and Random Access Memory (RAM). The ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), or FLASH memory, or a combination of two or more of these, where appropriate. The RAM may be a Static Random-Access Memory (SRAM) or a Dynamic Random-Access Memory (DRAM), where the DRAM may be a Fast Page Mode DRAM (FPMDRAM), an Extended Data Output DRAM (EDODRAM), a Synchronous DRAM (SDRAM), and the like.
The memory 42 may be used to store or cache various data files to be processed and/or communicated, as well as computer program instructions executed by the processor 41.
The processor 41 reads and executes the computer program instructions stored in the memory 42 to implement any of the above-described methods for human-shape recognition based multi-person off duty recognition.
In some of these embodiments, the multi-person on-duty identification device based on human shape identification may also include a communication interface 43 and a bus 40. As shown in fig. 4, the processor 41, the memory 42, and the communication interface 43 are connected via the bus 40 to complete mutual communication.
The communication interface 43 is used for implementing communication between modules, devices, units and/or apparatuses in the embodiments of the present application. The communication interface 43 may also be implemented with other components such as: the data communication is carried out among external equipment, image/data acquisition equipment, a database, external storage, an image/data processing workstation and the like.
The bus 40 includes hardware, software, or both that couple the components of the multi-person on-duty identification device based on human shape identification to each other. The bus 40 includes, but is not limited to, at least one of the following: a data bus, an address bus, a control bus, an expansion bus, and a local bus. By way of example, and not limitation, the bus 40 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of these. The bus 40 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The multi-person on-duty identification device based on human shape identification can, on the basis of the acquired data, execute the multi-person on-duty identification method based on human shape identification of the embodiments of the application, thereby realizing the method described in conjunction with fig. 1.
In addition, in combination with the method for identifying multiple people who are off duty based on human shape identification in the above embodiments, the embodiments of the present application may provide a computer-readable storage medium to implement. The computer readable storage medium having stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the above embodiments of the method for human-shape recognition based on multi-person off duty recognition.
Compared with the prior art, the method has the following advantages:
1. The method uses a computer-based intelligent algorithm to automatically monitor the staff through the camera and to automatically judge their on-duty and off-duty states, realizing fully automatic monitoring and identification.
2. For different assessment scenarios, different sleeping-time and off-duty-time thresholds can be set independently, giving the method wide applicability across use scenarios.
3. Because measurement, identification and judgment are performed automatically, no manual guarding or monitoring is needed, saving human resource cost.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (8)

1. A multi-person on-duty identification method based on human shape identification is characterized by comprising the following steps:
step 1: setting system configuration items, including defining the identified area range, the number of identified people and the shortest off duty time of the identified people;
step 2: recognizing a human shape in the region range by using a human shape target detection algorithm, and sampling the human shape to obtain a human shape coordinate;
step 3: comparing the human-shaped coordinates with the human-shaped coordinates obtained in history, and calculating the mean square error of the two to obtain variance data;
step 4: judging whether the human figure moves according to the variance data, and if the human figure does not move within the preset time, indicating that the human figure falls asleep;
step 5: judging whether the human figure leaves the region range according to the variance data, and if the human figure leaves the region range, indicating that the human figure is off duty.
2. The method as claimed in claim 1, wherein the shortest off-duty time of a person is set as t, and the human-shaped coordinates are sampled at an interval derived from t (the sampling-interval formula is given only as an image in the original publication), yielding human-shaped coordinates at times t1, t2, ….
3. The method for identifying the presence of a plurality of people who are off duty based on the human figure identification as claimed in claim 1, wherein the step of identifying the human figure by utilizing a human figure target detection algorithm in the area range and sampling the human figure to obtain the human figure coordinate comprises the following steps:
step 1: recognizing the human shape in the region by utilizing a human shape target detection algorithm;
step 2: carrying out segmentation processing and binarization processing on the human figure;
step 3: extracting the human-shaped key point characteristics and the human-shaped coordinate rectangular frame, and filtering interference factors, wherein the interference factors comprise articles and pets;
step 4: sampling the human shape to obtain human-shaped coordinates.
4. The method as claimed in claim 3, wherein the human-shaped object detection algorithm is a DPM object detection algorithm, and the human-shaped object detection algorithm is used for recognizing human shapes of the input after calculating a gradient direction histogram and using a neural network to train to obtain a gradient model of the human body.
5. The method for identifying multiple people who are off duty based on human shape recognition as claimed in claim 1, wherein after step 5, the method further comprises: and sending the judgment result and the video data to the cloud.
6. A multi-person off-duty recognition device based on human shape recognition is characterized by comprising:
the system comprises a presetting module, a monitoring module and a monitoring module, wherein the presetting module is used for setting system configuration items, including the definition of an identified area range, the number of identified people and the shortest off-duty time of identified people;
the identification module is used for identifying the human shape in the area range by utilizing a human shape target detection algorithm and sampling the human shape to obtain a human shape coordinate;
the calculation module is used for comparing the human-shaped coordinates with the human-shaped coordinates obtained in the history, and calculating the mean square error of the human-shaped coordinates and the human-shaped coordinates obtained in the history to obtain a group of variance data;
the judging module is used for judging whether the human figure moves according to the variance data, and if the human figure does not move within the preset time, the human figure is asleep; and judging whether the human figure leaves the region range or not according to the variance data, and if the human figure leaves the region range, indicating that the human figure is off duty.
7. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the multi-person on-duty identification method based on human shape identification according to any one of claims 1 to 5.
8. A storage medium having a computer program stored thereon, wherein the computer program is configured to execute the multi-person on-duty identification method based on human shape identification according to any one of claims 1 to 5 when running.
CN202011625056.8A 2020-12-31 2020-12-31 Multi-person on-duty identification method and device based on human shape identification, electronic device and storage medium Pending CN113158730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011625056.8A CN113158730A (en) 2020-12-31 2020-12-31 Multi-person on-duty identification method and device based on human shape identification, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011625056.8A CN113158730A (en) 2020-12-31 2020-12-31 Multi-person on-duty identification method and device based on human shape identification, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113158730A true CN113158730A (en) 2021-07-23

Family

ID=76878185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011625056.8A Pending CN113158730A (en) 2020-12-31 2020-12-31 Multi-person on-duty identification method and device based on human shape identification, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113158730A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113489929A (en) * 2021-08-06 2021-10-08 远见智诚市场调研咨询(广东)有限公司 Method and device for monitoring supervision of garbage classification, computer equipment and storage medium
CN116071832A (en) * 2023-04-06 2023-05-05 浪潮通用软件有限公司 Sleep behavior monitoring method, device, equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160351031A1 (en) * 2014-02-05 2016-12-01 Min Sung Jo Warning System and Method Using Spatio-Temporal Situation Data
CN109711320A (en) * 2018-12-24 2019-05-03 兴唐通信科技有限公司 A kind of operator on duty's unlawful practice detection method and system
CN110363114A (en) * 2019-06-28 2019-10-22 深圳市中电数通智慧安全科技股份有限公司 A kind of person works' condition detection method, device and terminal device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160351031A1 (en) * 2014-02-05 2016-12-01 Min Sung Jo Warning System and Method Using Spatio-Temporal Situation Data
CN109711320A (en) * 2018-12-24 2019-05-03 兴唐通信科技有限公司 A kind of operator on duty's unlawful practice detection method and system
CN110363114A (en) * 2019-06-28 2019-10-22 深圳市中电数通智慧安全科技股份有限公司 A kind of person works' condition detection method, device and terminal device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113489929A (en) * 2021-08-06 2021-10-08 远见智诚市场调研咨询(广东)有限公司 Method and device for monitoring supervision of garbage classification, computer equipment and storage medium
CN116071832A (en) * 2023-04-06 2023-05-05 浪潮通用软件有限公司 Sleep behavior monitoring method, device, equipment and medium

Similar Documents

Publication Publication Date Title
WO2019091012A1 (en) Security check method based on facial recognition, application server, and computer readable storage medium
US11354901B2 (en) Activity recognition method and system
CN106557726B (en) Face identity authentication system with silent type living body detection and method thereof
CN109492612B (en) Fall detection method and device based on bone points
CN110197146B (en) Face image analysis method based on deep learning, electronic device and storage medium
WO2019127273A1 (en) Multi-person face detection method, apparatus, server, system, and storage medium
WO2019011165A1 (en) Facial recognition method and apparatus, electronic device, and storage medium
CN109285234B (en) Face recognition attendance checking method and device, computer device and storage medium
CN105590097B (en) Dual camera collaboration real-time face identification security system and method under the conditions of noctovision
CN110414376B (en) Method for updating face recognition model, face recognition camera and server
CN110647822A (en) High-altitude parabolic behavior identification method and device, storage medium and electronic equipment
CN108108711B (en) Face control method, electronic device and storage medium
CN113158730A (en) Multi-person on-duty identification method and device based on human shape identification, electronic device and storage medium
CN111723773B (en) Method and device for detecting carryover, electronic equipment and readable storage medium
CN103605971A (en) Method and device for capturing face images
CN110827432B (en) Class attendance checking method and system based on face recognition
CN113723157B (en) Crop disease identification method and device, electronic equipment and storage medium
CN112364803A (en) Living body recognition auxiliary network and training method, terminal, equipment and storage medium
CN108875509A (en) Biopsy method, device and system and storage medium
CN111435437A (en) PCB pedestrian re-recognition model training method and PCB pedestrian re-recognition method
CN111091047B (en) Living body detection method and device, server and face recognition equipment
CN115424335B (en) Living body recognition model training method, living body recognition method and related equipment
CN112347988A (en) Mask recognition model training method and device, computer equipment and readable storage medium
CN114898475A (en) Underground personnel identity identification method and device, electronic equipment and readable storage medium
CN112861711A (en) Regional intrusion detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210723)