CN116434296A - Real-time face recognition monitoring behavior method, device, equipment and medium - Google Patents


Info

Publication number
CN116434296A
CN116434296A
Authority
CN
China
Prior art keywords
experimental operation
experiment
experimental
images
operator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310233484.3A
Other languages
Chinese (zh)
Inventor
蒋博峰
王明
***
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen China Ark Information Industry Co ltd
Original Assignee
Shenzhen China Ark Information Industry Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen China Ark Information Industry Co ltd filed Critical Shenzhen China Ark Information Industry Co ltd
Priority to CN202310233484.3A
Publication of CN116434296A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application relates to the technical field of monitoring and recognition, and in particular to a method, device, equipment and medium for monitoring behavior through real-time face recognition. The method comprises the following steps: acquiring face images and experimental operation action images of an experiment operator, wherein the images are captured by a plurality of camera acquisition devices; identifying the category to which the experimental operation action belongs based on all the experimental operation action images, and matching the corresponding standard experimental operation action according to that category; judging, based on the standard experimental operation action, whether the performed action is standard, so as to obtain a judgment result; and, if the judgment result is that the experimental operation action is non-standard, determining the identity information of the experiment operator based on the face images and generating a compliance reminder signal according to the identity information and the judgment result, so as to remind the experiment operator. The technical effect of this application is to reduce the occurrence rate of safety accidents.

Description

Real-time face recognition monitoring behavior method, device, equipment and medium
Technical Field
The application relates to the technical field of monitoring and recognition, in particular to a method, a device, equipment and a medium for monitoring behaviors through face recognition in real time.
Background
During experimental work, an experiment operator first inputs his or her identity information and then enters an experimental operation area equipped with a plurality of camera acquisition devices, which monitor the operator's experimental behavior in real time. Having the experiment operator perform standard experimental operation actions during the experiment effectively reduces the occurrence rate of safety accidents.
Thus, it is important to identify whether the experimental operation actions of the experimental operator are standard.
Disclosure of Invention
In order to reduce the occurrence rate of safety accidents, the application provides a real-time face recognition monitoring behavior method, device, equipment and medium.
In a first aspect, the present application provides a real-time face recognition behavior monitoring method, which adopts the following technical scheme:
a real-time face recognition monitoring behavior method, comprising:
acquiring face images and experimental operation action images of an experiment operator, wherein the face images and the experimental operation action images are acquired by a plurality of camera acquisition devices;
identifying the category of the experimental operation action based on the images of all the experimental operation actions, and matching the corresponding standard experimental operation action according to the category of the experimental operation action;
judging whether the experimental operation action is a standard experimental operation action based on the standard experimental operation action, so as to obtain a judging result;
if the judging result is that the experiment operation action is not standard, the identity information of the experiment operator is determined based on the face image, and a standard reminding signal is generated according to the identity information of the experiment operator and the judging result so as to remind the experiment operator.
By adopting the above technical scheme, acquiring the face images and experimental operation action images of each experiment operator enables real-time monitoring of every operator's behavior. The standard experimental operation action is obtained from the category to which the performed action belongs, so whether the action is standard can be judged accurately against that category and its standard action; when the action is determined to be non-standard, a compliance reminder signal is generated from the operator's identity information and the judgment result. Through face recognition, the standard/non-standard judgment and the reminder signal sent to the experiment operator, every operator's experimental actions can be regulated, further reducing the occurrence rate of safety accidents.
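The claims describe the pipeline only at the level of steps; a minimal sketch of one monitoring cycle follows, with `classify`, `check` and `identify` as stand-ins for the recognition models the patent leaves unspecified (all names here are hypothetical):

```python
def monitor_step(action_images, face_image, classify, standard_actions, check, identify):
    """One monitoring cycle following the four claimed steps.

    classify/check/identify stand in for the unspecified recognition models;
    standard_actions maps an action category to its standard action.
    """
    category = classify(action_images)        # step S102: identify the category
    standard = standard_actions[category]     # step S102: match the standard action
    if check(action_images, standard):        # step S103: judge compliance
        return None                           # action is standard, nothing to do
    operator_id = identify(face_image)        # step S104: face recognition
    return {"operator": operator_id, "category": category,
            "reminder": f"operator {operator_id}: non-standard '{category}' action"}
```

Only the non-standard branch produces a reminder; a standard action yields no signal, matching the conditional in step S104.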
In one possible implementation, the identifying the category of the experimental operation action based on the images of all experimental operation actions includes:
determining a mobile limb of the experiment operator based on all images of the experimental operation actions;
determining position information corresponding to a plurality of key points of the mobile limb in each image of the experimental operation action based on the mobile limb of the experimental operator;
extracting a plurality of experimental operation equipment images from all images of experimental operation actions, wherein each experimental operation equipment image comprises a plurality of experimental operation equipment;
determining the moving track of the key points according to the key point position information of all the images of the experimental operation action aiming at each key point;
determining the category of the experimental operation action based on the moving track of all key points of the experimental operator and a plurality of experimental operation equipment images.
By adopting the above technical scheme, the mobile limb of the experiment operator and the position information of its key points are determined from all the images of the experimental operation action, so the movement track of each key point can be judged accurately; meanwhile, extracting a plurality of experimental operation equipment images from those images allows the category of the action to be judged accurately from two dimensions, namely the key-point movement tracks and the equipment images, and judging the category in turn allows the behavior of the experiment operator to be judged accurately.
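The key-point bookkeeping in the steps above can be sketched as follows; the per-frame keypoint representation is an assumption, since the patent does not fix one:

```python
def keypoint_trajectories(frames):
    """frames: time-ordered list of dicts mapping a keypoint name to its (x, y)
    position in that action image.  Returns keypoint name -> ordered list of
    positions, i.e. the movement track of each key point across all images."""
    tracks = {}
    for frame in frames:
        for name, pos in frame.items():
            tracks.setdefault(name, []).append(pos)
    return tracks
```

A downstream classifier would then consume these tracks together with the extracted equipment images.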
In one possible implementation manner, the determining the category of the experimental operation action based on the movement tracks of all key points of the experiment operator and the plurality of experimental operation equipment images includes:
determining a plurality of initial candidate categories of the experimental operation action based on the movement tracks of all key points of the experiment operator;
acquiring, in each image of the experimental operation action, distance information between each key point of the mobile limb of the experiment operator and each experimental operation equipment;
performing equipment identification according to all the distance information, the plurality of experimental operation equipment images and a preset equipment identification rule to determine a target experimental operation equipment;
determining the category of the experimental operation action based on the target experimental operation equipment and the plurality of initial candidate categories.
By adopting the above technical scheme, a plurality of initial candidate categories of the experimental operation action can be determined from the movement tracks of all key points of the experiment operator, narrowing the range of possible categories; meanwhile, the target experimental operation equipment can be determined from the distance between each key point of the mobile limb and each equipment, the plurality of equipment images and the preset equipment identification rule. The final category is then selected from the initial candidates from the perspective of the target equipment. Because confirming the category among the candidates requires only the target equipment, identification efficiency is effectively improved.
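As a rough illustration of how the target equipment can single out the final category from the initial candidates; the device names, category names and mapping table below are hypothetical, not disclosed by the patent:

```python
# Hypothetical equipment -> action-category table; the patent's identification
# rule is not disclosed, so this intersection rule is illustrative only.
DEVICE_CATEGORIES = {
    "dropper": {"dropping"},
    "glass_rod": {"stirring"},
    "funnel": {"filtering", "draining"},
}

def final_category(initial_categories, target_device):
    """Intersect the trajectory-based candidate categories with the categories
    the target equipment supports; unambiguous only if exactly one remains."""
    candidates = set(initial_categories) & DEVICE_CATEGORIES.get(target_device, set())
    return candidates.pop() if len(candidates) == 1 else None
```

When the intersection is empty or still ambiguous, the sketch returns `None` rather than guessing.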
In one possible implementation manner, the performing equipment identification according to all the distance information, the plurality of experimental operation equipment images and the preset equipment identification rule to determine a target experimental operation equipment includes:
judging whether the distance between each key point of the mobile limb and each experimental operation device is not more than a preset distance threshold value or not based on all the distance information;
when the distance between a key point of the mobile limb and an experimental operation equipment is not greater than the preset distance threshold, matching the experimental operation equipment image corresponding to that equipment from the plurality of experimental operation equipment images;
and identifying the corresponding experimental operation equipment image to determine target experimental operation equipment.
By adopting the above technical scheme, the experimental operation equipment images close to the key points of the mobile limb can be selected according to the distance information and the preset distance threshold, which further narrows the screening range along the dimensions of equipment count and equipment images; meanwhile, different recognition modes can be adopted according to the number of candidate equipment, improving recognition accuracy.
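The distance-threshold screening step can be sketched as below; representing distances in pixels per action image is an assumption:

```python
def candidate_devices(distances, threshold):
    """distances: dict mapping (keypoint, device) -> pixel distance measured in
    one image of the experimental operation action.  Returns the devices lying
    within the preset distance threshold of at least one mobile-limb key point."""
    return sorted({dev for (_kp, dev), d in distances.items() if d <= threshold})
```

The surviving devices are the ones whose equipment images proceed to identification.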
In one possible implementation manner, the determining identity information of the experiment operator based on the face image includes:
extracting iris images of the experiment operators based on the face images, and performing identity verification based on the iris images of the experiment operators to determine first identity information of the experiment operators;
extracting face characteristic region information based on the face image, and determining second identity information of an experiment operator based on the face characteristic region information;
and determining the identity information of the experiment operator based on the first identity information of the experiment operator and the second identity information of the experiment operator.
By adopting the above technical scheme, the first identity information of the experiment operator is determined from the iris image in the face image, and the second identity information from the face characteristic region information; combining the first and second identity information enables accurate determination of the experiment operator's identity.
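A minimal sketch of combining the two verification results; the fusion rule is an assumption, since the patent does not state how the first and second identity information are reconciled:

```python
def resolve_identity(first_identity, second_identity):
    """Fuse the iris-based (first) and face-feature-based (second) results.
    Assumed rule: accept only when both verifications succeed and agree;
    otherwise report that no reliable identity was established."""
    if first_identity and first_identity == second_identity:
        return first_identity
    return None
```

A deployment could instead fall back to one modality when the other fails; the strict-agreement rule here is simply the most conservative choice.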
In one possible implementation manner, before the step of acquiring the face image of the experiment operator and the image of the experiment operation action, the method further includes:
identifying dressing information of an experiment operator when the experiment operator does not enter an experiment operation area, wherein the dressing information comprises: clothing and shoe ornament wearing information of the experiment operator, hairstyle information of the experiment operator and hand information of the experiment operator;
judging whether the dressing of the experiment operator accords with the wearing specification of the experimental operation area based on the dressing information of the experiment operator;
if yes, prompting an experiment operator to enter an experiment operation area;
if the wearing state is not met, a wearing non-standard prompt signal is sent out.
By adopting the above technical scheme, before the experiment operator enters the experimental operation area, the operator's dressing can be identified comprehensively from the clothing and shoe wearing information, hairstyle information and hand information. Identifying the dressing information prevents an operator from entering the experimental operation area while dressed non-standardly, reduces the probability of safety accidents caused by non-standard dressing, and further improves the safety of the experimental operation.
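The dressing check can be sketched as a rule match; the required-item set below is hypothetical, as the patent only names the information categories (clothing and shoes, hairstyle, hands):

```python
# Hypothetical wearing specification for the experimental operation area.
REQUIRED_DRESSING = {"lab_coat", "closed_shoes", "tied_hair", "gloves"}

def check_dressing(detected_items):
    """Return (compliant, message) for the items detected on the operator."""
    missing = sorted(REQUIRED_DRESSING - set(detected_items))
    if missing:
        return False, "non-standard dressing, missing: " + ", ".join(missing)
    return True, "dressing compliant, operator may enter the experimental area"
```

The two return branches correspond to the two prompts in the steps above: admission on compliance, a non-standard-dressing signal otherwise.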
In one possible implementation manner, after the acquiring the face image of the experiment operator and the image of the experiment operation action, the method further includes:
preprocessing the face image and the images of the experimental operation actions of the experiment operator, wherein the processing modes include: grayscale conversion and mean removal;
correspondingly, identifying the category to which the first experimental operation action belongs based on the images of all the experimental operation actions, and matching the corresponding preset standard experimental operation actions according to the category to which the first experimental operation action belongs, wherein the method comprises the following steps:
identifying the category to which the first experimental operation action belongs based on all the preprocessed images of the experimental operation actions, and matching the corresponding preset standard experimental operation action according to that category.
By adopting the above technical scheme, the face images and the images of the experimental operation actions are preprocessed after acquisition: grayscale conversion speeds up image processing, and mean removal highlights the features of the experimental operation action, effectively improving recognition efficiency.
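The two preprocessing steps can be sketched on a plain nested-list image; the BT.601 grayscale weights are an assumption, since the patent does not name a conversion formula:

```python
def preprocess(rgb_image):
    """rgb_image: 2-D list of (r, g, b) tuples.  Grayscale conversion uses the
    common ITU-R BT.601 luma weights (assumed; the patent names no formula),
    then mean removal subtracts the global mean, zero-centring pixel values."""
    gray = [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]
    n = sum(len(row) for row in gray)
    mean = sum(sum(row) for row in gray) / n
    return [[p - mean for p in row] for row in gray]
```

Zero-centred inputs are a standard normalisation before feature extraction; in practice a library routine (e.g. an OpenCV colour conversion) would replace the hand-rolled loops.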
In a second aspect, the present application provides a real-time face recognition monitoring behavior device, which adopts the following technical scheme:
a real-time face recognition monitoring behavior device, comprising:
the image acquisition module is used for acquiring face images of an experiment operator and images of experiment operation actions, wherein the face images and the images of the experiment operation actions are acquired by a plurality of camera acquisition devices;
the category identification module is used for identifying the category of the experimental operation action based on the images of all the experimental operation actions, and matching the corresponding preset standard experimental operation action according to the category of the experimental operation action;
the judging module is used for judging whether the experimental operation action is a standard experimental operation action based on a preset standard experimental operation action to obtain a judging result, and triggering the reminding module if the judging result is a non-standard experimental operation action;
and the reminding module is used for determining the identity information of the experiment operator based on the face image and generating a standard reminding signal according to the identity information of the experiment operator and the judging result so as to remind the experiment operator.
In a third aspect, the present application provides an electronic device, which adopts the following technical scheme: an electronic device, comprising:
at least one processor;
a memory;
at least one application program, wherein the at least one application program is stored in the memory and configured to be executed by the at least one processor, the at least one application program configured to: and executing the real-time face recognition monitoring behavior method.
In a fourth aspect, the present application provides a computer readable storage medium, which adopts the following technical scheme:
a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the above-described real-time face recognition monitoring behavior method.
In summary, the present application includes at least one of the following beneficial technical effects:
1. Acquiring the face images and experimental operation action images of each experiment operator enables real-time monitoring of every operator's behavior; the standard experimental operation action is obtained from the category to which the performed action belongs, so whether the action is standard can be judged accurately against that category and its standard action, and when the action is determined to be non-standard, a compliance reminder signal is generated from the operator's identity information and the judging result. Through face recognition, the compliance judgment and the reminder signal, every experiment operator's actions can be regulated, further reducing the occurrence rate of safety accidents.
2. Before the experiment operator enters the experimental operation area, the operator's dressing can be comprehensively identified from the clothing and shoe wearing information, hairstyle information and hand information. Identifying the dressing information prevents an operator from entering the experimental operation area while dressed non-standardly, reduces the probability of safety accidents caused by non-standard dressing, and helps standardize the dressing of experiment operators.
Drawings
Fig. 1 is a flow chart of a real-time face recognition monitoring behavior method according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a real-time face recognition monitoring behavior device according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to fig. 1-3.
The present embodiment is merely illustrative of the present application and is not intended to be limiting. After reading this specification, those skilled in the art may make modifications to the embodiment as required without creative contribution, and such modifications are protected by patent law within the scope of the claims of the present application.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In addition, the term "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In this context, unless otherwise specified, the term "/" generally indicates that the associated object is an "or" relationship.
Embodiments of the present application are described in further detail below with reference to the drawings attached hereto.
During experimental work, an experiment operator first inputs his or her identity information and then enters an experimental operation area equipped with a plurality of camera acquisition devices, which monitor the operator's experimental behavior in real time. Having the experiment operator perform standard experimental operation actions effectively reduces the occurrence rate of safety accidents. Thus, it is important to identify whether the experimental operation actions of the experiment operator are standard.
In order to solve the technical problems, the embodiment of the application provides a real-time face recognition monitoring behavior method, which is used for acquiring face images and experimental operation action images of an experiment operator, wherein the face images and the experimental operation action images are acquired by a plurality of camera acquisition devices, identifying the category of the experimental operation action based on the images of all the experimental operation actions, and matching corresponding standard experimental operation actions according to the category of the experimental operation action; judging whether the experimental operation action is a standard experimental operation action according to the standard experimental operation action to obtain a judging result; if the judgment result is that the experiment operation action is not standard, the identity information of the experiment operator is determined according to the face image, and a standard reminding signal is generated according to the identity information of the experiment operator and the judgment result so as to remind the experiment operator.
The embodiment of the application provides a real-time face recognition monitoring behavior method, which is executed by electronic equipment, wherein the electronic equipment can be a server or terminal equipment, and the server can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server for providing cloud computing service. The terminal device may be a smart phone, a tablet computer, a notebook computer, a desktop computer, etc., but is not limited thereto, and the terminal device and the server may be directly or indirectly connected through a wired or wireless communication manner, which is not limited herein.
Referring to fig. 1, fig. 1 is a flowchart of a real-time face recognition monitoring behavior method provided in an embodiment of the present application, where the method includes steps S101, S102, S103, and S104, where:
step S101, acquiring face images of an experiment operator and images of experiment operation actions, wherein the face images and the images of the experiment operation actions are acquired by a plurality of camera acquisition devices.
Specifically, after a recognition request is received, the face images of the experiment operator and the images of the experimental operation actions in the experimental operation area are acquired. A monitoring program is integrated in the electronic device in advance to monitor for triggering of the recognition request; once triggering is detected, acquisition begins. The recognition may be confirmed by the user clicking a recognition button in the application or by voice. When the electronic device detects that the user has triggered a recognition request, it issues a recognition instruction, based on which the camera acquisition devices collect and upload the face images and action images in real time, and the electronic device performs recognition on them.
In the embodiment of the application, in order to collect the face images and experimental operation action images of the experiment operator from all directions, the camera acquisition devices are positioned at different orientations; their installation positions can be chosen by the user according to actual requirements or set according to experience. To capture images from a still more complete set of angles, there are at least four camera acquisition devices, i.e. at least one on each wall surface of the experimental operation area. The real-time face recognition monitoring behavior method provided by the embodiment of the application can be applied to chemical experiments or microbiological experiments.
The camera acquisition devices can acquire visible-light images and infrared images. When the acquired face image of the experiment operator is a frontal face image, it should include the whole face outline, whose elements are: the forehead, eyebrows, eyes, nose, mouth and ears of the experiment operator; when the acquired face image is not frontal, it should include at least three of these elements. The images of the experimental operation actions are images of multiple continuous experimental operation actions. There may be one or several experiment operators.
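The completeness rule for face images described above can be sketched as:

```python
FACE_ELEMENTS = {"forehead", "eyebrows", "eyes", "nose", "mouth", "ears"}

def face_image_usable(detected_elements, is_frontal):
    """Apply the rule above: a frontal image must contain the whole face
    outline (all six elements); a non-frontal image needs at least three."""
    detected = set(detected_elements) & FACE_ELEMENTS
    return detected == FACE_ELEMENTS if is_frontal else len(detected) >= 3
```

Images failing the check would be discarded before identity verification; how the elements are detected is left open by the patent.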
Step S102, identifying the category of the experimental operation action based on the images of all the experimental operation actions, and matching the corresponding standard experimental operation action according to the category of the experimental operation action.
Specifically, the identification of the category to which the experimental operation action belongs may be determined by the mobile limb of the experimental operator and the target experimental operation equipment, or by a three-dimensional posture model.
The manner of determining the category based on the mobile limb of the laboratory operator and the target laboratory operating device may include: determining a mobile limb of an experiment operator based on all images of the experiment operation action, and determining position information corresponding to a plurality of key points of the mobile limb in each image of the experiment operation action based on the mobile limb of the experiment operator; extracting a plurality of experimental operation equipment images from all images of the experimental operation action, and determining a movement track of each key point according to the key point position information of all the images of the experimental operation action aiming at each key point; the category of the experimental operation action is determined based on the movement tracks of all key points of the experimental operator and the images of the plurality of experimental operation devices.
The manner of determining the category based on the three-dimensional posture model may include: inputting the images of all experimental operation actions into a three-dimensional posture model to generate a three-dimensional posture model of the experimental operation action, the model being generated by three-dimensional modeling software. The generated three-dimensional posture model of the experimental operation action is then matched against the three-dimensional posture models stored in advance in the database; when the similarity between the generated model and any model in the database is greater than a first preset similarity threshold, the category of the experimental operation action corresponding to the image is determined to be the category corresponding to that model in the database, and the category to which the experimental operation action belongs is thereby determined. The first preset similarity threshold is not limited in the embodiment of the application.
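For illustration only (not part of the claimed method), the matching step above can be sketched as follows. The sketch assumes each three-dimensional posture model is summarized as a feature vector, and uses cosine similarity as a stand-in for the patent's unspecified similarity measure; the function name, the vector form and the 0.9 threshold are all assumptions.

```python
import numpy as np

def classify_by_pose_model(pose_vec, database, threshold=0.9):
    # Compare the generated pose descriptor with every pre-stored
    # descriptor; keep the best match above the first preset
    # similarity threshold, otherwise report no match.
    best_category, best_sim = None, threshold
    for category, ref_vec in database.items():
        sim = float(np.dot(pose_vec, ref_vec)
                    / (np.linalg.norm(pose_vec) * np.linalg.norm(ref_vec)))
        if sim > best_sim:
            best_category, best_sim = category, sim
    return best_category
```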
In this embodiment of the present application, the categories to which experimental operation actions belong may at least include: dripping, stirring, draining, filtering, cleaning, sterilizing and observing. The mobile limb may be the head, a hand or an arm. After the electronic equipment determines the category to which the experimental operation action belongs, the corresponding standard experimental operation action can be selected from the database for that category, which is prestored in the electronic equipment; the database stores the standard experimental operation action corresponding to each category, and the standard experimental operation actions may be demonstrated by professional technicians.
And step S103, judging whether the experimental operation action is a standard experimental operation action based on the standard experimental operation action, and obtaining a judging result.
Specifically, whether the experimental operation action is the standard experimental operation action can be determined through the standard experimental operation action and the similarity of the experimental operation actions. The specific implementation manner may include: the electronic equipment compares the experimental operation action with the standard experimental operation action, determines the similarity of the experimental operation action and the preset standard experimental operation action through comparison, judges whether the similarity is larger than a second preset similarity threshold value, and further can obtain a judging result.
When the similarity between the experimental operation action and the standard experimental operation action is greater than the second preset similarity threshold, the experimental operation action shows no obvious irregularity and has high safety, so no safety accident is expected. Meanwhile, the electronic equipment can record and store the face image and the images of the experimental operation action in real time. If the similarity between the experimental operation action and the standard experimental operation action is not greater than the second preset similarity threshold, step S104 is executed, which indicates that the experimental operation action shows an obvious irregularity and has a probability of causing a safety accident. The embodiment of the application does not limit the second preset similarity threshold, which may be 90%, 95% or 99%, and may be set by the user according to requirements.
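The threshold decision of step S103 reduces to a single comparison. A minimal sketch, with the 95% default taken from the examples given above (the function name is illustrative):

```python
def is_standard_action(similarity, threshold=0.95):
    # The action counts as standard only when its similarity to the
    # matched standard action exceeds the second preset similarity
    # threshold (e.g. 90%, 95% or 99%, set by the user).
    return similarity > threshold
```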
And step S104, if the judgment result is that the experiment operation action is not standard, determining the identity information of the experiment operator based on the face image, and generating a standard reminding signal according to the identity information of the experiment operator and the judgment result so as to remind the experiment operator.
If the judgment result indicates that the action is not a standard experimental operation action, the experimental operation of the experiment operator carries a high risk, so the identity information of the experiment operator needs to be identified for reminding.
Specifically, the identity information of the experiment operator can be determined either from the face image together with the iris image, or from a three-dimensional face model.
The method for identifying the identity information of the experiment operator based on the face image and the iris image can comprise the following steps: extracting iris images of the experiment operators based on the face images, and performing identity verification based on the iris images of the experiment operators to determine first identity information of the experiment operators; extracting face characteristic region information based on the face image, and determining second identity information of the experiment operator based on the face characteristic region information; the identity information of the experiment operator is determined based on the first identity information of the experiment operator and the second identity information of the experiment operator.
The method for identifying the identity information of the experiment operator based on the face three-dimensional model can comprise the following steps: generating a three-dimensional face model by using the acquired multi-frame face images with different angles through three-dimensional modeling software, matching the face model with face information in an experimental operator face information database, and further determining identity information of the experimental operator according to a matching result, wherein the face information database is pre-established and stored in electronic equipment, comprises a plurality of face images and identity information corresponding to the face images, and the identity information can comprise: the name, sex, age, cell phone number of the experimental operator.
The standard reminding signal may remind the experiment operator through the operator's mobile phone number, in the form of a short message or a voice call.
Based on the above embodiment, by acquiring the face image of the experiment operator and the images of the experimental operation actions, real-time monitoring of the operation actions of each experiment operator can be realized; the standard experimental operation action is obtained according to the category to which the experimental operation action belongs, and whether the experimental operation action is standard can be accurately judged from that category and the standard operation action; when the experimental operation action is determined to be non-standard, a standard reminding signal is generated according to the identity information of the experiment operator and the judgment result. By performing face recognition, judging whether the experimental operation action is standard and sending the standard reminding signal to the experiment operator, the standardization of every experiment operator and every experimental operation action can be realized, further reducing the incidence of safety accidents.
Further, in the embodiment of the present application, identifying the category to which the experimental operation action belongs based on the images of all the experimental operation actions includes step SA1 to step SA5 (not shown in the drawings), in which:
Step SA1, determining the mobile limb of the experiment operator based on all images of the experiment operation action.
Specifically, all images of the experimental operation actions are identified; if the position of the same limb changes between the current frame and the next frame of the experimental operation action images, the mobile limb of the experiment operator can be determined. It will be appreciated that the legs and feet are not involved in performing the experimental operation, so when the electronic device determines that the mobile limb of the experiment operator is a leg or foot, the mobile limb is redetermined.
And step SA2, determining position information corresponding to a plurality of key points of the mobile limb in each image of the experimental operation action based on the mobile limb of the experimental operator.
Specifically, when the mobile limb of the experiment operator is determined to be a hand, the key points of the mobile limb may include a finger or the wrist; when the mobile limb is the head, the key points may include the nose or the eyes. The position information corresponding to each key point is its three-dimensional spatial position in each frame of the experimental operation action images. A spatial three-dimensional coordinate system is established from the images of the experimental operation actions acquired at a plurality of different angles, so that the position of each key point in each frame can be obtained in that coordinate system. Across the different frames the coordinate origin does not change; the origin itself is not limited: it may be a point on the body of the experiment operator or a point in the experimental operation area, and can be set by the user according to the actual situation.
And step SA3, extracting a plurality of experimental operation equipment images from all the images of the experimental operation actions, wherein each experimental operation equipment image comprises a plurality of experimental operation equipment.
Specifically, for example, when an experiment operator performs a filtering operation, the corresponding experimental operation equipment is a funnel, a beaker, a glass rod and an iron stand. Several experimental operation equipment images can be extracted from all images of the experimental operation actions through an experimental operation equipment extraction model, which is obtained by training a convolutional neural network.
And step SA4, determining, for each key point, the movement track of the key point from the key point position information of all the images of the experimental operation actions.
Specifically, when the mobile limb of the experiment operator moves, the position information of all key points of the mobile limb also changes. And further, a curve of each key point in a space three-dimensional coordinate system, namely a movement track corresponding to the key point, can be generated according to the positions of the key points of different movable limbs.
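The per-frame positions of one key point can be stacked into a movement track in a straightforward way. This sketch is illustrative only; the `(x, y, z)` tuple format and the path-length summary are assumptions, and all positions are taken in the shared spatial coordinate system whose origin, as required above, does not change between frames.

```python
import numpy as np

def keypoint_trajectory(frames):
    # Stack the per-frame (x, y, z) positions of one key point into an
    # (N, 3) trajectory array, and compute the total path length as a
    # simple summary of the movement track.
    traj = np.asarray(frames, dtype=float)
    steps = np.diff(traj, axis=0)
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    return traj, path_length
```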
And step SA5, determining the category of the experimental operation action based on the movement tracks of all key points of the experimental operator and the images of the plurality of experimental operation devices.
Specifically, the category may be determined either from the target experimental operation equipment together with the distance information between the key points and the experimental operation equipment, or from the correspondence between the movement tracks of the key points and the experimental operation equipment images.
One achievable way of determining the category to which the experimental operation action belongs based on the target experimental operation equipment and the distance information between the key point and the experimental operation equipment specifically may include: determining a plurality of initially belonging categories of experimental operation actions based on the movement tracks of all key points of the experimental operator; acquiring distance information between each key point of a mobile limb of an experiment operator and each experiment operation device in each image of the experiment operation action; performing equipment identification according to all the distance information and a plurality of experimental operation equipment images to determine target experimental operation equipment; the category to which the experimental operation action belongs is determined based on the target experimental operation device and several initial categories to which the experimental operation action belongs.
Another realizable manner may include: determining all experimental operation equipment based on the experimental operation equipment images; determining, for each piece of equipment, the category of experimental operation action to which it corresponds; acquiring the preset movement tracks of all key points corresponding to each such category; and judging whether the change trend of the movement tracks of the key points is the same as that of the preset movement tracks. If so, the category to which the experimental operation action belongs is determined. The preset movement tracks are input into the electronic equipment in advance.
Based on the above embodiment, the mobile limb of the experiment operator and the position information of a plurality of its key points are determined from all images of the experimental operation actions, so the movement tracks of the key points can be judged accurately; meanwhile, several experimental operation equipment images are extracted from all the images, so the category to which the experimental operation action belongs can be judged accurately from two dimensions, the movement tracks of all key points of the experiment operator and the several experimental operation equipment images, and the behavior of the experiment operator can in turn be judged accurately from that category.
Further, in the embodiment of the present application, determining the category to which the experimental operation action belongs based on the movement tracks of all the key points of the experiment operator and the several experimental operation equipment images includes step SB1 to step SB4 (not shown in the drawings), in which:
step SB1, determining a plurality of initially belonging categories of the experimental operation action based on the movement tracks of all key points of the experimental operator.
Specifically, the movement track of a key point is the route the key point traverses from its starting position to its ending position while the experiment operator performs the corresponding experimental operation, and it has a spatial characteristic. By calculating the two-dimensional angle from the starting position to the ending position of the movement track, the movement amplitude of each key point of the mobile limb can be determined, and whether the movement amplitude of each key point is greater than a preset movement amplitude threshold can be judged. For example, when the movement amplitude of a finger is determined to be greater than the preset movement amplitude threshold, the plurality of initial categories of the experimental operation action corresponding to the finger can be determined; for instance, they may be dripping or stirring.
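One possible reading of the "two-dimensional angle" used to gauge movement amplitude is the angle of the start-to-end displacement in the image plane. The sketch below follows that reading; the function names, the degree measure and the 30-degree default threshold are assumptions, since the patent does not fix the exact formula.

```python
import math

def movement_amplitude(start, end):
    # Angle (in degrees) of the start-to-end displacement of a key
    # point in the image plane, used as its movement amplitude.
    dx, dy = end[0] - start[0], end[1] - start[1]
    return math.degrees(math.atan2(dy, dx))

def exceeds_amplitude(start, end, threshold_deg=30.0):
    # Compare against the preset movement amplitude threshold.
    return abs(movement_amplitude(start, end)) > threshold_deg
```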
Step SB2, obtaining distance information between each key point of the mobile limb of the experiment operator and each experiment operation device in each image of the experiment operation action.
Specifically, the distance information between each key point of the mobile limb and each experimental operation device is the Euclidean distance between the key point and the device in three-dimensional space. It may be the distance in the image of any single frame of the experimental operation actions, or the distances in the images of all frames. The distance information can be obtained from the position of the key point and the position of the experimental operation device in the spatial three-dimensional coordinate system.
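The Euclidean distance in three-dimensional space referred to above is computed directly from the two positions in the shared coordinate system (the function name is illustrative):

```python
import math

def euclidean_distance(keypoint, device):
    # 3-D Euclidean distance between a moving-limb key point and an
    # experimental operation device, both expressed in the spatial
    # coordinate system established in step SA2.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(keypoint, device)))
```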
And step SB3, performing equipment identification according to all the distance information and a plurality of experimental operation equipment images to determine target experimental operation equipment.
Specifically, the device identification can be performed by the number of experimental operation devices, or the occurrence frequency of the experimental operation devices.
The method for identifying the equipment based on the number of experimental operation devices may include: judging, based on all the distance information, whether the distance between each key point of the mobile limb and each experimental operation device exceeds a preset distance threshold; when the distance between a key point of the mobile limb and an experimental operation device is not more than the preset distance threshold, matching the corresponding experimental operation equipment image from the plurality of experimental operation equipment images; and identifying the matched experimental operation equipment images so as to determine the target experimental operation equipment.
The method for identifying the equipment based on the occurrence frequency of the experimental operation equipment comprises the following steps: extracting experimental operation equipment from each experimental operation equipment image, calculating the occurrence frequency of each experimental operation equipment, and screening out an experimental operation equipment set based on the occurrence frequency of the equipment and a preset frequency threshold value, namely determining that the experimental operation equipment can generate the experimental operation equipment set when the occurrence frequency of the experimental operation equipment is larger than the preset frequency threshold value; and acquiring the key point distance between the experimental operation equipment and the mobile limb in the experimental operation equipment set from all the distance information, and taking the corresponding experimental operation equipment with the shortest distance between the mobile limb and the experimental operation equipment as target experimental operation equipment. The preset frequency threshold is not limited, and a user can set the preset frequency threshold by himself.
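The frequency-based mode above can be sketched as follows. This is an illustration under assumptions: `device_sightings` lists the device detected in each equipment image, `shortest_distances` maps each device to its shortest key-point distance, and the default frequency threshold of 3 is hypothetical (the patent leaves it to the user).

```python
from collections import Counter

def select_target_device(device_sightings, shortest_distances, freq_threshold=3):
    # Devices whose occurrence frequency exceeds the preset frequency
    # threshold form the candidate set; among the candidates, the
    # device closest to the mobile limb is taken as the target.
    counts = Counter(device_sightings)
    candidates = [d for d, n in counts.items() if n > freq_threshold]
    if not candidates:
        return None
    return min(candidates, key=lambda d: shortest_distances[d])
```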
Step SB4, determining the category of the experimental operation action based on the target experimental operation equipment and the categories of the initial experimental operation actions.
Specifically, the electronic device may match, according to the identified target experimental operation device, a corresponding name from the name database, match, by using a semantic analysis method, the name of the target experimental operation device with a plurality of initially belonging categories, and when the degree of matching of the names is greater than a preset threshold of degree of matching of the names, determine the belonging category of the experimental operation action. The preset name matching degree threshold is not limited in the embodiment of the application.
Based on the above embodiment, the initial categories can be determined from the movement tracks of all key points of the experiment operator, which narrows the range of candidate categories. Meanwhile, the target experimental operation equipment can be determined from the distance information between each key point of the mobile limb and each experimental operation device, the experimental operation equipment images and the preset equipment identification rules. The category to which the experimental operation action belongs can then be determined from the initial categories based on the target experimental operation equipment alone, which effectively improves the efficiency of identifying the category to which the experimental operation action belongs.
Further, in the embodiment of the present application, the category to which the experimental operation action belongs is determined based on the movement track of all the key points of the experimental operator and the images of the several experimental operation devices, including steps SC1-SC3 (not shown in the drawings), wherein:
and step SC1, judging whether the distance between each key point of the mobile limb and each experimental operation device is not more than a preset distance threshold value or not based on all the distance information.
And step SC2, when the distance between a key point of the mobile limb and an experimental operation device is not more than the preset distance threshold, matching the corresponding experimental operation equipment image from the plurality of experimental operation equipment images.
And step SC3, identifying the corresponding experimental operation equipment image so as to determine the target experimental operation equipment.
Specifically, all the distance information is the distance information between all key points of the mobile limbs of a plurality of experiment operators and each experiment operation device. When the distance between each key point of the mobile limb and each experimental operation device is not more than the preset distance threshold, the electronic device can screen out the corresponding experimental operation device image of which the distance between each key point of the mobile limb and each experimental operation device is not more than the preset distance threshold from a plurality of experimental operation device images, and the corresponding experimental operation device image is the target experimental operation device image, so that the target experimental operation device can be determined through image identification. The embodiment of the application does not limit the preset distance threshold, and the user can set the distance threshold by himself.
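Steps SC1 and SC2 amount to filtering the equipment images by the distance threshold. A minimal sketch, assuming each image index maps to the list of key-point-to-device distances measured in that image (the data layout and the 0.5 default threshold are assumptions):

```python
def screen_device_images(distance_table, images, max_distance=0.5):
    # Keep only the equipment images whose key-point-to-device
    # distances all fall within the preset distance threshold.
    return [images[i] for i, dists in distance_table.items()
            if all(d <= max_distance for d in dists)]
```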
Based on the above embodiment, the experimental operation equipment images can be screened out according to the distance information and the preset distance threshold, and the screening range can be further reduced from the number of the experimental operation equipment and the dimension of the experimental operation equipment images, and meanwhile, different identification modes can be adopted according to the number of the experimental operation equipment, so that the identification accuracy is improved.
Further, in the embodiment of the present application, the identity information of the experimental operator is determined based on the face image, including step SD 1-step SD3 (not shown in the drawings), wherein:
and step SD1, extracting iris images of the experiment operators based on the face images, and performing identity verification based on the iris images of the experiment operators to determine first identity information of the experiment operators.
Specifically, the eye image of the experiment operator can be obtained from the acquired infrared face image, and the obtained eye image can then be preprocessed; preprocessing improves the sharpness of the eye image and may include image smoothing and edge detection. The data in the iris image are extracted by a preset algorithm; the Bowman core algorithm is preferably used in the embodiment of the application to improve the calculation speed. The extracted data are compared with the data in the iris information base, and the first identity information of the experiment operator is determined by the comparison. The iris information base is input into the electronic equipment in advance and stores the identity information corresponding to each iris image.
And step SD2, extracting face characteristic region information based on the face image, and determining second identity information of the experiment operator based on the face characteristic region information.
Specifically, the face feature regions include at least three of the eye region, lip region, eyebrow region, nose region and jaw region. The face feature regions are extracted from the acquired face images of different angles by a preset algorithm; the preset algorithm is not limited in the embodiment of the application, and may be the HoG algorithm, the Dlib algorithm or a convolutional neural network algorithm. Any plurality of extracted face feature regions are compared with the corresponding face feature regions in the face feature information base to obtain a comparison similarity, and when the comparison similarity is greater than a third comparison similarity threshold, the second identity information of the experiment operator is determined. In the embodiment of the application, the third comparison similarity threshold is preferably set to 95%, which further improves the accuracy of the obtained second identity information.
And step SD3, determining the identity information of the experiment operator based on the first identity information of the experiment operator and the second identity information of the experiment operator.
Specifically, the electronic device compares the acquired first identity information with the second identity information to determine whether the first identity information and the second identity information are the same. If the first identity information is the same as the second identity information, the identity information of the experiment operator can be determined; if the first identity information and the second identity information are different, the electronic equipment compares the first identity information and the second identity information based on the face characteristic areas again, and if a plurality of pieces of second identity information obtained based on the face characteristic area comparison are the same, the second identity information is determined to be the identity information of the experiment operator.
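The reconciliation logic of step SD3 can be sketched as follows. The sketch assumes the re-comparison produces one identity result per compared feature region, passed in as a list; the function name and the `None` return on conflict are illustrative.

```python
def resolve_identity(first_identity, second_identities):
    # Accept the identity when the iris-based (first) and
    # face-feature (second) results agree; on disagreement, re-check
    # the per-region results and accept them only when unanimous.
    if second_identities and all(s == first_identity for s in second_identities):
        return first_identity
    if second_identities and len(set(second_identities)) == 1:
        return second_identities[0]
    return None  # results conflict; identity cannot be determined
```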
Based on the embodiment, the first identity information of the experiment operator is determined through the iris image in the face image, the second identity information of the experiment operator is determined through the face characteristic area information, and the accurate positioning of the identity information of the experiment operator is realized based on the first identity information and the second identity information.
Further, in the embodiment of the present application, before acquiring the face image of the experiment operator and the image of the experimental operation action, steps SE1-SE4 (not shown in the drawings) are further included, where:
step SE1, identifying dressing information of the experiment operator before the experiment operator enters the experimental operation area, wherein the dressing information includes: the clothing and footwear wearing information of the experiment operator, the hair style information of the experiment operator, and the hand information of the experiment operator.
And step SE2, judging whether the dressing of the experiment operator accords with the wearing specification of the experiment operation area based on the dressing information of the experiment operator.
Specifically, a camera acquisition device is also arranged outside the experimental operation area to collect whole-body images of the experiment operator before the operator enters the area, and the dressing information of the experiment operator is identified from the whole-body images, which the electronic equipment recognizes. The clothing and footwear wearing information of the experiment operator includes: whether the laboratory clothing is worn and worn correctly, and whether the shoes are slippers. The hair style information of the experiment operator includes: whether a laboratory cap is worn, and whether the hair length exceeds the earlobe. The hand information of the experiment operator includes: the nail length of the experiment operator, whether the nails are coated with other chemical reagents, and whether the hands wear jewelry. The wearing specification of the experimental operation area is as follows: the experiment operator wears the laboratory clothing correctly, the shoes are not slippers, a laboratory cap is worn, the hair length does not exceed the earlobe, the nail length does not exceed the specified length, the nails are not coated with other chemical reagents, and the hands do not wear jewelry.
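The wearing specification above reduces to a boolean checklist. A minimal sketch, assuming the recognition stage has already produced the listed fields as booleans (the field names are illustrative):

```python
def dressing_compliant(info):
    # Checklist form of the wearing specification of the
    # experimental operation area described above.
    return (info["lab_coat_worn_correctly"]
            and not info["wearing_slippers"]
            and info["lab_cap_worn"]
            and not info["hair_past_earlobe"]
            and not info["nails_over_length"]
            and not info["nails_coated_with_reagent"]
            and not info["wearing_jewelry"])
```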
And step SE3, if the result is met, prompting an experiment operator to enter an experiment operation area.
Step SE4, if not, sending out a non-standard wearing reminding signal.
Specifically, if the electronic device judges that any item of the dressing information of the experiment operator does not meet the wearing specification of the experimental operation area, the dressing of the experiment operator is determined to be non-compliant, and in order to reduce the incidence of safety accidents, a non-standard wearing reminding signal can be sent by voice reminder or alarm sound; if the dressing information of the experiment operator meets the wearing specification of the experimental operation area, the experiment operator is prompted to enter the experimental operation area.
Based on the above embodiment, before the experiment operator enters the experimental operation area, the operator's dressing can be comprehensively identified from the clothing and footwear wearing information, the hair style information and the hand information. Identifying the dressing information reduces the probability that an experiment operator enters the experimental operation area with non-standard dressing, and thereby reduces the probability of safety accidents caused by non-standard dressing, further improving the safety of the experimental operation.
Further, in the embodiment of the present application, after acquiring the face image of the experiment operator and the image of the experiment operation action, the method further includes:
preprocessing the face image of the experiment operator and the experimental operation action images, wherein the preprocessing includes: grayscale conversion and mean removal;
correspondingly, identifying the category to which the first experimental operation action belongs based on the images of all the experimental operation actions, and matching the corresponding preset standard experimental operation actions according to the category to which the first experimental operation action belongs, wherein the method comprises the following steps:
and identifying the category to which the first experimental operation action belongs based on the images of all the preprocessed experimental operation actions, and matching the corresponding preset standard experimental operation actions according to the category of the first experimental operation action.
Specifically, the R, G, and B components of the face image of the experiment operator and of the experimental operation action images are obtained, weights are set for the three channels, and a weighted average is computed to obtain the grayscale value of each pixel; the grayscaled face image and experimental operation action images are then subjected to mean removal, which better highlights the edge features of the operator's face and the edge features corresponding to the experimental operation actions.
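The weighted grayscale conversion and mean removal described above can be sketched as follows. The channel weights used here are the common luminance weights, assumed for illustration since the disclosure does not specify exact values.

```python
import numpy as np

# Sketch of the described preprocessing: weighted RGB-to-gray conversion
# followed by mean removal. The weights (0.299, 0.587, 0.114) are the
# conventional luminance weights, assumed here for illustration.

def to_gray(img_rgb: np.ndarray, weights=(0.299, 0.587, 0.114)) -> np.ndarray:
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b  # per-pixel weighted average

def remove_mean(gray: np.ndarray) -> np.ndarray:
    # Subtracting the mean centres intensities around zero, which tends
    # to emphasise edges relative to the uniform background level.
    return gray - gray.mean()

img = np.random.default_rng(0).uniform(0, 255, size=(4, 4, 3))
g = to_gray(img)          # shape (4, 4), one gray value per pixel
d = remove_mean(g)        # de-meaned image has (near-)zero mean
```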
Based on the above embodiment, after the face image of the experiment operator and the images of the experimental operation actions are acquired, the images are preprocessed: grayscale conversion speeds up image processing, and mean removal highlights the features of the experimental operation actions, effectively improving recognition efficiency.
The above embodiments describe a real-time face recognition monitoring behavior method from the perspective of the method flow; the following embodiments describe a real-time face recognition monitoring behavior device from the perspective of virtual modules or virtual units.
The embodiment of the application provides a real-time face recognition monitoring behavior device, as shown in fig. 3, the real-time face recognition monitoring behavior device may specifically include:
the image acquisition module 210 is configured to acquire a face image of an experiment operator and an image of an experiment operation action, where the face image and the image of the experiment operation action are acquired by a plurality of camera acquisition devices;
the category identifying module 220 is configured to identify the category to which the experimental operation action belongs based on the images of all experimental operation actions, and to match the corresponding preset standard experimental operation action according to the category to which the experimental operation action belongs;
The judging module 230 is configured to judge whether the experimental operation action is a standard experimental operation action based on a preset standard experimental operation action, obtain a judging result, and trigger the reminding module 240 if the judging result is an irregular experimental operation action;
the reminding module 240 is configured to determine identity information of the experiment operator based on the face image, and generate a standard reminding signal according to the identity information of the experiment operator and the judgment result, so as to remind the experiment operator.
For the embodiment of the present application, real-time monitoring of the operation behavior of each experiment operator can be achieved by acquiring the operator's face image and the images of the experimental operation actions. The standard experimental operation action is obtained according to the category to which the experimental operation action belongs, and whether the action is standard can be accurately judged from that category and the standard action; when the action is determined to be non-standard, a standard-operation reminder signal is generated according to the operator's identity information and the judgment result. Through face recognition, judgment of whether each experimental operation action is standard, and the reminder signal sent to the operator, every operator and every experimental operation action can be regulated, and standardized operation also reduces the incidence of safety accidents.
In one possible implementation of the embodiment of the present application, when identifying the category of the experimental operation action based on the images of all experimental operation actions, the category identifying module 220 is configured to:
determining a mobile limb of the experiment operator based on all images of the experimental operation actions;
determining position information corresponding to a plurality of key points of the mobile limb in each image of the experimental operation action based on the mobile limb of the experimental operator;
extracting a plurality of experimental operation equipment images from all images of experimental operation actions, wherein each experimental operation equipment image comprises a plurality of experimental operation equipment;
determining the moving track of the key points according to the key point position information of all the images of the experimental operation action aiming at each key point;
the category of the experimental operation action is determined based on the movement tracks of all key points of the experimental operator and the images of the plurality of experimental operation devices.
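The per-keypoint movement tracks described above can be sketched as a simple aggregation over the frame sequence. Keypoint detection itself is out of scope here; the frames are assumed to already carry named keypoint coordinates, and the keypoint names are illustrative assumptions.

```python
# Sketch of building per-keypoint movement tracks from a sequence of
# experimental-operation-action images. Keypoint names ("wrist",
# "elbow") are assumed for illustration.

def build_tracks(frames):
    """frames: list of {keypoint_name: (x, y)} dicts, one per image.
    Returns {keypoint_name: [(x, y), ...]} movement tracks."""
    tracks = {}
    for frame in frames:
        for name, pos in frame.items():
            tracks.setdefault(name, []).append(pos)
    return tracks

frames = [{"wrist": (0, 0), "elbow": (1, 1)},
          {"wrist": (1, 0), "elbow": (1, 2)}]
print(build_tracks(frames)["wrist"])  # -> [(0, 0), (1, 0)]
```

Each track is then available for the category determination in the next step.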
In one possible implementation of the embodiment of the present application, when determining the category of the experimental operation action based on the movement tracks of all key points of the experiment operator and the plurality of experimental operation device images, the category identifying module 220 is configured to:
Determining a plurality of initially belonging categories of experimental operation actions based on the movement tracks of all key points of the experimental operator;
acquiring distance information between each key point of a mobile limb of an experiment operator and each experiment operation device in each image of the experiment operation action;
performing equipment identification according to all distance information, a plurality of experimental operation equipment images and preset equipment identification rules to determine target experimental operation equipment;
the category to which the experimental operation action belongs is determined based on the target experimental operation device and several initial categories to which the experimental operation action belongs.
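Narrowing the several initial categories down using the target device can be sketched as a lookup. The category-to-device table below is an invented example, since the disclosure does not name concrete categories or devices.

```python
# Sketch of resolving the final action category from several initial
# categories plus the identified target device. The table entries are
# assumed examples, not part of the disclosure.

CATEGORY_DEVICES = {
    "titration": {"burette"},
    "heating": {"alcohol_lamp"},
    "weighing": {"balance"},
}

def resolve_category(initial_categories, target_device):
    """Return the first initial category consistent with the target
    device, or None if none matches."""
    for cat in initial_categories:
        if target_device in CATEGORY_DEVICES.get(cat, set()):
            return cat
    return None

print(resolve_category(["heating", "weighing"], "balance"))  # -> weighing
```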
In one possible implementation of the embodiment of the present application, when performing device identification according to all the distance information and the plurality of experimental operation device images to determine the target experimental operation device, the category identifying module 220 is configured to:
judging whether the distance between each key point of the mobile limb and each experimental operation device is not more than a preset distance threshold value or not based on all the distance information;
when the distance between a key point of the moving limb and an experimental operation device is not greater than the preset distance threshold, matching the corresponding experimental operation device image from the plurality of experimental operation device images;
And identifying the corresponding experimental operation equipment image so as to determine the target experimental operation equipment.
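The distance-threshold rule above can be sketched as follows: a device image is considered a candidate only when some keypoint of the moving limb comes within the preset threshold of that device. Coordinates, device names, and the threshold value are illustrative assumptions.

```python
import math

# Sketch of the distance-threshold device filter. Device names and
# positions are assumed for illustration.

def candidate_devices(keypoints, devices, threshold):
    """keypoints: list of (x, y); devices: {name: (x, y)}.
    Returns names of devices within `threshold` of any keypoint."""
    hits = []
    for name, (dx, dy) in devices.items():
        if any(math.hypot(kx - dx, ky - dy) <= threshold
               for kx, ky in keypoints):
            hits.append(name)
    return hits

devices = {"burette": (1.0, 1.0), "balance": (10.0, 10.0)}
print(candidate_devices([(0.5, 0.5)], devices, threshold=2.0))  # -> ['burette']
```

The matched candidate images would then be passed to recognition to determine the target experimental operation device.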
In one possible implementation of the embodiment of the present application, when determining the identity information of the experiment operator based on the face image, the reminding module 240 is configured to:
extracting iris images of the experiment operators based on the face images, and performing identity verification based on the iris images of the experiment operators to determine first identity information of the experiment operators;
extracting face characteristic region information based on the face image, and determining second identity information of the experiment operator based on the face characteristic region information;
the identity information of the experiment operator is determined based on the first identity information of the experiment operator and the second identity information of the experiment operator.
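Fusing the two verification results can be sketched as an agreement check: the identity is confirmed only when the iris-based and face-feature-based results agree. The matcher outputs and identifier format below are stand-ins for real recognizers, assumed for illustration.

```python
# Sketch of combining the first (iris-based) and second (face-feature-
# based) identity results. Identifier strings are assumed examples.

def fuse_identity(iris_id, face_id):
    """Return the confirmed identity, or None on disagreement or failure."""
    if iris_id is not None and iris_id == face_id:
        return iris_id
    return None

print(fuse_identity("op_007", "op_007"))  # -> op_007
print(fuse_identity("op_007", "op_008"))  # -> None
```

Requiring agreement between the two modalities trades a little availability for a lower false-accept rate.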
In one possible implementation manner of the embodiment of the present application, the real-time face recognition monitoring behavior device further includes:
the dressing information identification module is used for:
identifying dressing information of the experiment operator when the experiment operator does not enter the experiment operation area, wherein the dressing information comprises: clothing and shoe ornament wearing information of the experiment operator, hairstyle information of the experiment operator and hand information of the experiment operator;
Judging whether dressing of the experiment operator accords with wearing specifications of the experiment operation area based on dressing information of the experiment operator;
if yes, prompting an experiment operator to enter an experiment operation area;
if the wearing state is not met, a wearing non-standard prompt signal is sent out.
In one possible implementation manner of the embodiment of the present application, the real-time face recognition monitoring behavior device further includes:
an image preprocessing module for:
preprocess the face image of the experiment operator and the experimental operation action images, wherein the preprocessing includes: grayscale conversion and mean removal;
Accordingly, when identifying the category to which the first experimental operation action belongs based on the images of all experimental operation actions and matching the corresponding preset standard experimental operation action according to that category, the category identifying module 220 is configured to:
identify the category to which the first experimental operation action belongs based on the images of all preprocessed experimental operation actions, and match the corresponding preset standard experimental operation action according to that category.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, a specific working process of the real-time face recognition monitoring behavior device described above may refer to a corresponding process in the foregoing method embodiment, which is not described herein again.
The following describes an electronic device provided in the embodiments of the present application, where the electronic device described below and the real-time face recognition monitoring behavior method described above may be referred to correspondingly.
An embodiment of the present application provides an electronic device. As shown in fig. 3, which is a schematic structural diagram of the electronic device provided in the embodiment of the present application, the electronic device 300 includes a processor 301 and a memory 303, where the processor 301 is coupled to the memory 303, for example via a bus 302. Optionally, the electronic device 300 may also include a transceiver 304. It should be noted that, in practical applications, the transceiver 304 is not limited to one, and the structure of the electronic device 300 does not limit the embodiments of the present application.
The processor 301 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with the disclosure of the embodiments of the present application. The processor 301 may also be a combination that implements computing functionality, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 302 may include a path to transfer information between the components. The bus 302 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 302 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 3, but this does not mean there is only one bus or only one type of bus.
The memory 303 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 303 is used for storing application program codes for executing embodiments of the present application, and is controlled to be executed by the processor 301. The processor 301 is configured to execute the application code stored in the memory 303 to implement what is shown in the foregoing method embodiments.
The electronic devices include, but are not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers. The electronic device shown in fig. 3 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments herein.
A computer-readable storage medium provided in the embodiments of the present application is described below. The computer-readable storage medium stores a computer program which, when run on a computer, causes the computer to execute the corresponding content of the foregoing method embodiments. Compared with the related art, real-time monitoring of the operation behavior of each experiment operator can be achieved by acquiring the operator's face image and the images of the experimental operation actions; the preset standard experimental operation action is obtained according to the category to which the experimental operation action belongs, whether the action is standard can be accurately judged from that category and the standard action, and, when the action is determined to be non-standard, a standard-operation reminder signal is generated according to the operator's identity information and the judgment result. Through face recognition, judgment of whether each experimental operation action is standard, and the reminder signal sent to the operator, every operator and every experimental operation action can be regulated, and standardized operation also reduces the incidence of safety accidents.
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
The foregoing describes only some embodiments of the present application. It should be noted that, for a person skilled in the art, several improvements and modifications can be made without departing from the principle of the present application, and these improvements and modifications should also be considered within the protection scope of the present application.

Claims (10)

1. A real-time face recognition monitoring behavior method, characterized by comprising the following steps:
acquiring face images and experimental operation action images of an experiment operator, wherein the face images and the experimental operation action images are acquired by a plurality of camera acquisition devices;
Identifying the category of the experimental operation action based on the images of all the experimental operation actions, and matching the corresponding standard experimental operation action according to the category of the experimental operation action;
judging whether the experimental operation action is a standard experimental operation action or not based on a standard experimental operation action to obtain a judging result;
if the judging result is that the experiment operation action is not standard, the identity information of the experiment operator is determined based on the face image, and a standard reminding signal is generated according to the identity information of the experiment operator and the judging result so as to remind the experiment operator.
2. The real-time face recognition monitoring behavior method according to claim 1, wherein identifying the category of the experimental operation action based on the images of all experimental operation actions comprises:
determining a mobile limb of the experiment operator based on all images of the experimental operation actions;
determining position information corresponding to a plurality of key points of the mobile limb in each image of the experimental operation action based on the mobile limb of the experimental operator;
extracting a plurality of experimental operation equipment images from all images of experimental operation actions, wherein each experimental operation equipment image comprises a plurality of experimental operation equipment;
Determining the moving track of the key points according to the key point position information of all the images of the experimental operation action aiming at each key point;
determining the category of the experimental operation action based on the moving track of all key points of the experimental operator and a plurality of experimental operation equipment images.
3. The method for real-time face recognition monitoring behavior according to claim 2, wherein the determining the category of the experimental operation action based on the moving trajectories of all key points of the experimental operator and the images of the plurality of experimental operation devices comprises:
determining a plurality of initially belonging categories of experimental operation actions based on the movement tracks of all key points of the experimental operator;
acquiring distance information between each key point of a mobile limb of an experiment operator and each experiment operation device in each image of the experiment operation action;
performing equipment identification according to all the distance information and a plurality of experimental operation equipment images to determine target experimental operation equipment;
determining the category of the experimental operation action based on the target experimental operation equipment and a plurality of initial categories of the experimental operation action.
4. The real-time face recognition monitoring behavior method according to claim 3, wherein performing device identification according to all the distance information and the plurality of experimental operation device images to determine the target experimental operation device comprises:
Judging whether the distance between each key point of the mobile limb and each experimental operation device is not more than a preset distance threshold value or not based on all the distance information;
when the distance between a key point of the moving limb and an experimental operation device is not greater than the preset distance threshold, matching the corresponding experimental operation device image from the plurality of experimental operation device images;
and identifying the corresponding experimental operation equipment image to determine target experimental operation equipment.
5. The method for real-time face recognition monitoring behavior according to claim 1, wherein the determining the identity information of the experiment operator based on the face image comprises:
extracting iris images of the experiment operators based on the face images, and performing identity verification based on the iris images of the experiment operators to determine first identity information of the experiment operators;
extracting face characteristic region information based on the face image, and determining second identity information of an experiment operator based on the face characteristic region information;
and determining the identity information of the experiment operator based on the first identity information of the experiment operator and the second identity information of the experiment operator.
6. The real-time face recognition monitoring behavior method according to claim 1, wherein, before acquiring the face image of the experiment operator and the images of the experimental operation actions, the method further comprises:
identifying dressing information of an experiment operator when the experiment operator does not enter an experiment operation area, wherein the dressing information comprises: clothing and shoe ornament wearing information of the experiment operator, hairstyle information of the experiment operator and hand information of the experiment operator;
judging whether the dressing of the experiment operator accords with the wearing specification of the experiment operation area or not based on the dressing information of the experiment operator;
if yes, prompting an experiment operator to enter an experiment operation area;
if the wearing state is not met, a wearing non-standard prompt signal is sent out.
7. The real-time face recognition monitoring behavior method according to any one of claims 1 to 5, wherein, after acquiring the face image of the experiment operator and the images of the experimental operation actions, the method further comprises:
preprocessing the face image of the experiment operator and the experimental operation action images, wherein the preprocessing includes: grayscale conversion and mean removal;
correspondingly, identifying the category to which the first experimental operation action belongs based on the images of all the experimental operation actions, and matching the corresponding preset standard experimental operation actions according to the category to which the first experimental operation action belongs, wherein the method comprises the following steps:
And identifying the category to which the first experimental operation action belongs based on the images of all the preprocessed experimental operation actions, and matching the corresponding preset standard experimental operation actions according to the category to which the first experimental operation action belongs.
8. A real-time face recognition monitoring behavior device, comprising:
the image acquisition module is used for acquiring face images of an experiment operator and images of experiment operation actions, wherein the face images and the images of the experiment operation actions are acquired by a plurality of camera acquisition devices;
the category identification module is used for identifying the category of the experimental operation action based on the images of all the experimental operation actions and matching the corresponding standard experimental operation action according to the category of the experimental operation action;
the judging module is used for judging whether the experimental operation action is a standard experimental operation action or not based on the standard experimental operation action to obtain a judging result, and triggering the reminding module if the judging result is an irregular experimental operation action;
and the reminding module is used for determining the identity information of the experiment operator based on the face image and generating a standard reminding signal according to the identity information of the experiment operator and the judging result so as to remind the experiment operator.
9. An electronic device, comprising:
at least one processor;
a memory;
at least one application program, wherein the at least one application program is stored in the memory and configured to be executed by the at least one processor, the at least one application program configured to: a method of performing real-time face recognition monitoring behaviour as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed in a computer, causes the computer to perform the real-time face recognition monitoring behavior method of any one of claims 1 to 7.
CN202310233484.3A 2023-03-02 2023-03-02 Real-time face recognition monitoring behavior method, device, equipment and medium Pending CN116434296A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310233484.3A CN116434296A (en) 2023-03-02 2023-03-02 Real-time face recognition monitoring behavior method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN116434296A true CN116434296A (en) 2023-07-14

Family

ID=87084510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310233484.3A Pending CN116434296A (en) 2023-03-02 2023-03-02 Real-time face recognition monitoring behavior method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116434296A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN209590934U (en) * 2019-01-21 2019-11-05 淮北师范大学 A kind of laboratory dressing control system lack of standardization
CN111325128A (en) * 2020-02-13 2020-06-23 上海眼控科技股份有限公司 Illegal operation detection method and device, computer equipment and storage medium
CN112016363A (en) * 2019-05-30 2020-12-01 富泰华工业(深圳)有限公司 Personnel monitoring method and device, computer device and readable storage medium
US20210319213A1 (en) * 2020-04-09 2021-10-14 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for prompting motion, electronic device and storage medium
CN113516064A (en) * 2021-07-02 2021-10-19 深圳市悦动天下科技有限公司 Method, device, equipment and storage medium for judging sports motion
CN113627409A (en) * 2021-10-13 2021-11-09 南通力人健身器材有限公司 Body-building action recognition monitoring method and system
CN114187561A (en) * 2021-11-30 2022-03-15 广西世纪创新显示电子有限公司 Abnormal behavior identification method and device, terminal equipment and storage medium
CN115482502A (en) * 2022-08-31 2022-12-16 广东电网有限责任公司广州供电局 Abnormal behavior identification method, system and medium based on characteristic object and human body key point
CN115482485A (en) * 2022-09-05 2022-12-16 四川大学华西医院 Video processing method and device, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination