CN112488005B - On-duty monitoring method and system based on human skeleton recognition and multi-angle conversion - Google Patents


Info

Publication number
CN112488005B
CN112488005B (application number CN202011404353.XA)
Authority
CN
China
Prior art keywords
node
nodes
crotch
distance
front projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011404353.XA
Other languages
Chinese (zh)
Other versions
CN112488005A (en)
Inventor
张庆
管绍朋
崔旭
岳涛
李奕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Linyi Xinshang Network Technology Co ltd
Original Assignee
Linyi Xinshang Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Linyi Xinshang Network Technology Co ltd filed Critical Linyi Xinshang Network Technology Co ltd
Priority to CN202011404353.XA priority Critical patent/CN112488005B/en
Publication of CN112488005A publication Critical patent/CN112488005A/en
Application granted granted Critical
Publication of CN112488005B publication Critical patent/CN112488005B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/109Time management, e.g. calendars, reminders, meetings or time accounting
    • G06Q10/1091Recording time for administrative or management purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/08Projecting images onto non-planar surfaces, e.g. geodetic screens
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Operations Research (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Tourism & Hospitality (AREA)
  • Educational Administration (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Marketing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)

Abstract

The invention discloses an on-duty monitoring method and system based on human skeleton recognition and multi-angle conversion. The method comprises the following steps: extracting shoulder nodes, crotch nodes and the tailbone node from a video image, and screening out non-front projection images and non-parallel projection images according to the angles of the shoulder-node and crotch-node connecting lines and the distances between the crotch nodes and the tailbone node; converting the non-front projection images and the non-parallel projection images into front projection images by a transverse angle conversion method and a longitudinal distance conversion method, respectively; and recognizing the corresponding on-duty posture with a pre-trained posture recognition model, using the skeleton features extracted from the front projection image. By converting non-front or non-parallel projections of the video image into front projections through a range smoothing algorithm, the method improves detection accuracy at the algorithm level, achieves more accurate posture evaluation, and reduces the dependence of a real-time on-duty monitoring system on specialized equipment.

Description

On-duty monitoring method and system based on human skeleton recognition and multi-angle conversion
Technical Field
The invention relates to the technical field of intelligent on-duty monitoring, in particular to an on-duty monitoring method and system based on human skeleton recognition and multi-angle conversion.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
In recent years, as surveillance electronics have spread into many fields, large volumes of surveillance video and images must be processed efficiently. At the same time, banks, stock exchanges and other businesses need to know staff attendance and to monitor whether employees arrive on time or leave their posts at will, so that attendance can be evaluated accurately. In addition, in some scenes an accident in a video-monitored area may go undetected because personnel have left their post without permission, so the accident cannot be handled promptly and effectively, causing heavy losses and safety problems. For these scenes, real-time monitoring of human on-duty status is therefore required.
Most traditional on-duty monitoring and inspection methods require staff to watch camera feeds. This is inefficient and cannot guarantee accuracy: absence from the post, shift changes or sleeping on duty may be missed or falsely reported. Moreover, such methods require a large number of cameras and produce complex monitoring views, which is unfavorable for real-time on-duty monitoring.
An intelligent on-duty monitoring method based on computer vision therefore has clear advantages: no dedicated monitoring personnel are needed, 24-hour real-time monitoring is possible, and the system is more intelligent overall. However, the inventors found that existing computer-vision-based on-duty monitoring methods still have the following defects:
(1) Acquiring video images via a binary streaming-media protocol is inefficient;
(2) Acquiring video images via the conventional ffmpeg pipeline or a third-party framework is prone to packet loss, and screen corruption easily appears in scenes with frequent network fluctuation;
(3) Performing recognition directly on the raw captured images involves huge amounts of image data, placing excessive demands on server performance and lowering processing efficiency;
(4) With multi-angle camera placements, posture evaluation errors arise from the viewing angle;
(5) A large amount of specialized equipment is required, which is costly and hard to popularize.
Disclosure of Invention
To solve the above problems, the invention provides an on-duty monitoring method and system based on human skeleton recognition and multi-angle conversion.
To achieve this object, the invention adopts the following technical scheme:
in a first aspect, the invention provides an on-duty monitoring method based on human skeleton recognition and multi-angle conversion, which comprises the following steps:
extracting shoulder nodes, crotch nodes and tail bone nodes in the video image, and screening a non-front projection graph and a non-parallel projection graph according to the angles of connecting lines of the shoulder nodes and the crotch nodes and the distances between the crotch nodes and the tail bone nodes;
respectively converting the non-front projection drawing and the non-parallel projection drawing into front projection drawings according to a transverse angle conversion method and a longitudinal distance conversion method;
and recognizing by adopting a pre-trained gesture recognition model according to the bone features extracted from the front projection drawing to obtain the corresponding on-duty gesture.
In a second aspect, the present invention provides an on-duty monitoring system based on human bone recognition and multi-angle conversion, comprising:
the image processing module is used for extracting shoulder nodes, crotch nodes and tail bone nodes in the video image, and screening a non-front projection graph and a non-parallel projection graph according to the angles of connecting lines of the shoulder nodes and the crotch nodes and the distances between the crotch nodes and the tail bone nodes;
the image conversion module is used for respectively converting the non-front projection drawing and the non-parallel projection drawing into a front projection drawing according to a transverse angle conversion method and a longitudinal distance conversion method;
and the bone recognition module is used for recognizing the corresponding on-duty posture by adopting a pre-trained posture recognition model according to the bone characteristics extracted from the front projection drawing.
In a third aspect, the present invention provides an electronic device comprising a memory and a processor, and computer instructions stored on the memory and executed on the processor, wherein when the computer instructions are executed by the processor, the method of the first aspect is performed.
In a fourth aspect, the present invention provides a computer readable storage medium for storing computer instructions which, when executed by a processor, perform the method of the first aspect.
In a fifth aspect, the present invention provides an attendance checking platform, comprising:
the service control layer is used for receiving the attendance checking instruction;
the business logic layer is used for analyzing the attendance instruction and acquiring a video image according to the attendance instruction;
and the bottom layer assembly is used for carrying out on-duty gesture recognition on the video image by adopting the method of the first aspect to obtain an on-duty detection result.
In a sixth aspect, the present invention provides an attendance checking method, including:
acquiring an attendance checking instruction;
analyzing an attendance instruction, and acquiring a video image according to the attendance instruction;
and performing on-duty gesture recognition on the video image by adopting the method of the first aspect to obtain an on-duty detection result.
Compared with the prior art, the invention has the following beneficial effects:
aiming at the problem of low video image acquisition efficiency, the invention encapsulates the ffmpeg video stream content based on the tcp reliable protocol so as to acquire the image.
Aiming at the problem of screen splash of the video image caused by network fluctuation, the invention preprocesses the video image based on the Gaussian algorithm and the gray algorithm, compresses the image data while ensuring the image quality and reduces the dependence on the network performance.
The method uses conventional data samples as relative references for subsequent posture recognition, and meanwhile generates a front-projection model library and a head-type data-range mapping based on an artificial-intelligence adversarial network, thereby constructing the posture recognition model and improving the processing efficiency of the raw video image.
The non-front projection drawing or the non-parallel projection drawing of the video image is converted into the front projection drawing through the range smoothing algorithm, so that more accurate posture evaluation is realized; the detection precision of the video image is improved from the algorithm level, and the dependence of a real-time on-duty monitoring system on professional equipment is reduced.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention, and are included to illustrate an exemplary embodiment of the invention and not to limit the invention.
FIG. 1 is a flow chart of an on-duty monitoring method based on human bone identification and multi-angle conversion according to embodiment 1 of the present invention;
FIG. 2 (a) is a side perspective view provided in example 1 of the present invention;
FIG. 2 (b) is a front perspective view provided in embodiment 1 of the present invention;
FIG. 3 (a) is a schematic view of body curvature provided in example 1 of the present invention;
fig. 3 (b) is a schematic view of the body erection provided in embodiment 1 of the present invention;
fig. 4 is a schematic diagram of an attendance platform architecture provided in embodiment 5 of the present invention;
fig. 5 is a flowchart of an attendance checking method provided in embodiment 6 of the present invention.
Detailed Description
the invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and "comprising", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example 1
As shown in fig. 1, the embodiment provides an on-duty monitoring method based on human bone recognition and multi-angle conversion, which includes:
s1: extracting shoulder nodes, crotch nodes and tail bone nodes in the video image, and screening a non-front projection graph and a non-parallel projection graph according to the angles of connecting lines of the shoulder nodes and the crotch nodes and the distances between the crotch nodes and the tail bone nodes;
s2: respectively converting the non-front projection drawing and the non-parallel projection drawing into front projection drawings according to a transverse angle conversion method and a longitudinal distance conversion method;
s3: and recognizing by adopting a pre-trained gesture recognition model according to the bone features extracted from the front projection drawing to obtain the corresponding on-duty gesture.
In step S1, the extracting of the video image includes:
since ffmpeg supports the TCP/IP protocol family, a suitable transport-layer protocol is selected so that packet loss is repaired at the transport layer as far as possible; the acquired video image is then checked for integrity, and incomplete video images are supplemented through Gaussian filtering implemented with a direct convolution algorithm.
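As a concrete illustration of this acquisition step, the sketch below builds an ffmpeg command that pulls the stream over TCP (a reliable transport, so packet repair happens below the application layer) and samples frames to disk. The RTSP URL, output pattern and sampling rate are hypothetical; the patent does not specify them.

```python
import subprocess

def build_capture_command(rtsp_url, out_pattern, fps=1):
    """Build an ffmpeg command that pulls an RTSP stream over TCP
    (reliable transport, so packet loss is repaired below the
    application) and dumps sampled frames as JPEG images.
    The URL, output pattern and sampling rate are illustrative."""
    return [
        "ffmpeg",
        "-rtsp_transport", "tcp",   # force TCP instead of lossy UDP
        "-i", rtsp_url,
        "-vf", f"fps={fps}",        # sample frames at the given rate
        "-q:v", "2",                # high JPEG quality
        out_pattern,
    ]

def capture(rtsp_url, out_pattern="frame_%05d.jpg"):
    """Run the acquisition; raises if ffmpeg exits with an error."""
    subprocess.run(build_capture_command(rtsp_url, out_pattern), check=True)
```

`capture("rtsp://camera-host/stream")` would then write `frame_00001.jpg`, `frame_00002.jpg`, … for the later processing stages.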
In step S1, the method further includes preprocessing the video image, including:
the method comprises the steps of carrying out geometric proportion cutting on collected video images, combining block data after two-dimensional preselection transformation is carried out, and finally carrying out picture compression, so that the dependence on network performance is reduced, and the generation of image screens is reduced.
In this embodiment, the screening the non-front projection view and the non-parallel projection view specifically includes:
s1-1: extracting shoulder nodes, crotch nodes and tail bone nodes, and connecting the two shoulder nodes, the two crotch nodes, the tail bone nodes and the left and right crotch nodes;
s1-2: judging whether the angle is a parallel angle or not through the included angles between the two shoulder joint connecting lines and the two crotch joint connecting lines and the parallel lines;
s1-3: judging whether the front projection angle is obtained or not according to the comparison result of the distance difference value between the tail bone node and the left and right hip bone nodes and the normal range value;
s1-4: if the included angles between the two shoulder joint connecting lines and the two crotch joint connecting lines and the parallel lines are smaller than a preset included angle threshold value, the included angles are regarded as parallel angles; otherwise, the angle is a non-parallel angle;
s1-5: if the comparison result of the distance difference value between the tail bone node and the left and right hip bone nodes and the normal range value is larger than a preset distance threshold value, determining that the front projection angle is not the front projection angle;
s1-6: if the video image meets the parallel angle and the non-front projection angle, the video image is a non-front projection image;
s1-7: if the video image does not meet the parallel angle, the video image is a non-parallel projection image;
specifically, if the included angle between the two shoulder node connecting lines and the parallel lines is greater than a preset included angle threshold value, and the included angle between the two crotch node connecting lines and the parallel lines is smaller than the preset included angle threshold value, the lower half is regarded as parallel, and the upper half moves longitudinally;
otherwise, the upper half body is considered to be parallel, and the lower half body moves longitudinally;
and if the included angles between the two shoulder joint connecting lines and the two crotch joint connecting lines and the parallel lines are larger than the preset included angle threshold value, the whole movement is considered.
For example, in this embodiment the distance between the left-shoulder and right-shoulder nodes is fixed at 60-80 cm, the distance between the left and right crotch-bone nodes is fixed at 60-80 cm, and the perpendicular distance from the tailbone node to the shoulder connecting line is fixed at 60-80 cm. The ranges of the 10 pairwise connecting-line distances and the sines of the 15 included angles among the 5 nodes are then calculated; the range mapping of each connecting-line distance f and each included-angle sine value α is given by:
α ∈ [0°, 180°];
Cn = sin(α);
Fn ∈ λ·Σ(1, Cn)/n.
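The screening rules of S1-1 to S1-7 can be sketched as follows. The 10° angle threshold and 10% distance tolerance come from the embodiment; using left/right symmetry of the tailbone-to-crotch distances as the "normal range value" comparison is an assumption, and the node coordinates are hypothetical 2-D image points:

```python
import math

def deviation_from_horizontal(p1, p2):
    """Angle (degrees, in [0, 90]) between segment p1-p2 and the horizontal."""
    a = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0])) % 180
    return min(a, 180 - a)

def classify_projection(l_sh, r_sh, l_hip, r_hip, tail,
                        angle_thresh=10.0, dist_tol=0.10):
    """Screening per S1-4 .. S1-7: both the shoulder line and the crotch
    line must lie within angle_thresh of horizontal to count as a
    parallel angle; a front projection additionally requires the tailbone
    to sit symmetrically between the two crotch nodes (dist_tol stands in
    for the patent's 'normal range value' comparison)."""
    sh_dev = deviation_from_horizontal(l_sh, r_sh)
    hip_dev = deviation_from_horizontal(l_hip, r_hip)
    if sh_dev >= angle_thresh or hip_dev >= angle_thresh:
        return "non_parallel"          # S1-7: parallel angle not met
    dl, dr = math.dist(tail, l_hip), math.dist(tail, r_hip)
    if abs(dl - dr) > dist_tol * max(dl, dr):
        return "non_front"             # S1-6: parallel but not front-facing
    return "front"
```

Only images classified `non_front` or `non_parallel` proceed to the transverse or longitudinal conversion, respectively.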
in this embodiment, the calculating the front projection diagram by using the range smoothing algorithm for the non-front projection diagram and the non-parallel projection diagram specifically includes:
s2-1: transverse conversion: when the included angles between the connecting lines of the two shoulders and the two crotch nodes and the parallel lines are less than 10 degrees, the connecting lines are regarded as parallel angles, and when the distance difference between the nodes of the tail bones and the left and right crotch bones is more than 10% of the normal range value, the connecting lines are regarded as non-front projection angles, and the video image is a non-front projection image;
calculating the ratio of a distance difference value between a tail bone node and left and right hip bones and the ratio of a front projection conventional width to an actual angle width according to a front bilateral symmetry principle, and converting a node coordinate of a side projection into a coordinate point of a front projection graph;
as shown in fig. 2 (a) -2 (b), specifically, the lateral conversion method is:
j1+j2=j
j/(jz1+jz2)=jl;
jr=((jz1+jz2)/2)*jl;
jd=j1/jr;
k1+k2=k;
k/(kz1+kz2)=kl;
kr=((kz1+kz2)/2)*kl;
kd=k1/kr;
the rotation degree is as follows: (jd + kd)/2;
wherein j1 represents the distance between the left shoulder node and the intersection point in the front projection drawing, j2 represents the distance between the right shoulder node and the intersection point in the front projection drawing, k1 represents the distance between the left crotch node and the coccyx node in the front projection drawing, and k2 represents the distance between the right crotch node and the coccyx node in the front projection drawing;
jz1 represents the distance between the left shoulder node and the intersection point in the side projection graph, jz2 represents the distance between the right shoulder node and the intersection point in the side projection graph, kz1 represents the distance between the left crotch node and the coccyx node in the side projection graph, and kz2 represents the distance between the right crotch node and the coccyx node in the side projection graph; jr represents the shoulder radius; kr denotes the crotch radius.
Similarly, the other node parameters, such as those of the torso, hands, elbows, feet and knees, are converted at the same ratio.
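The transverse conversion formulas above can be transcribed directly; variable names follow the patent's j/k notation, and the eight input distances are assumed to be measured in the same units:

```python
def rotation_factor(j1, j2, jz1, jz2, k1, k2, kz1, kz2):
    """Transcription of the transverse (lateral) conversion ratios:
    jl = j/(jz1+jz2), jr = ((jz1+jz2)/2)*jl, jd = j1/jr,
    and symmetrically kl, kr, kd for the crotch distances.
    Returns the rotation degree (jd + kd)/2."""
    j = j1 + j2                    # frontal shoulder width
    jl = j / (jz1 + jz2)           # frontal/side width ratio, shoulders
    jr = (jz1 + jz2) / 2 * jl      # shoulder radius
    jd = j1 / jr
    k = k1 + k2                    # frontal crotch width
    kl = k / (kz1 + kz2)
    kr = (kz1 + kz2) / 2 * kl      # crotch radius
    kd = k1 / kr
    return (jd + kd) / 2
```

For a perfectly symmetric pose (j1 = j2, k1 = k2), the rotation degree evaluates to 1, which makes it a natural reference point for scaling the side-view node coordinates back into the front projection.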
S2-2: longitudinal conversion: when the included angle between the connecting line of the two crotch nodes and the parallel line is less than 10 degrees and the included angle alpha between the connecting line of the two shoulder nodes and the parallel line is more than 10 degrees, the lower half body is considered to be parallel, the upper half body moves longitudinally, and the video image is a non-parallel projection image;
calculating the translation distance of the two shoulders through the centers of the two crotch nodes, and calculating the longitudinal movement distance through a trigonometric function, namely cosine or sine;
as shown in fig. 3 (a) -3 (b), specifically, the longitudinal conversion calculation method:
[The longitudinal conversion formula is given only as an image in the original publication; it relates jh to jz and the included angle α through a trigonometric function.]
thus, the longitudinal transition distance is: jh/2; jx = jz; jz/kz is approximately equal to j/k;
thus, the lateral transition distance is: km × (j/k).
Where α represents the angle between line jz and jx, and km represents the horizontal distance between the midpoint of line jx and the coccyx node on line kz.
Similarly, according to local movement or overall movement, other node parameters such as hands, elbows, feet, knees and the like are converted in a linkage mode.
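Since the longitudinal formula itself survives only as an equation image, the sketch below is a hedged reconstruction: it assumes jh = jz·sin(α) for the vertical drop of the tilted shoulder line, and takes the transition distances jh/2 and km·(j/k) directly from the text:

```python
import math

def longitudinal_shift(jz, alpha_deg, km, j_over_k):
    """Reconstructed longitudinal conversion (the exact formula appears
    only as an image in the patent). Assuming the tilted shoulder line jz
    makes angle alpha with its level position jx, its vertical drop is
    jh = jz * sin(alpha); per the text, the longitudinal transition
    distance is jh/2 and the lateral transition distance is km * (j/k).
    Returns (longitudinal_distance, lateral_distance)."""
    jh = jz * math.sin(math.radians(alpha_deg))
    return jh / 2, km * j_over_k
```

Shifting the shoulder nodes (and, by linkage, the hand, elbow, foot and knee nodes) by these distances levels the upper body back into a parallel projection.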
In this embodiment, in the step S3, constructing the gesture recognition model includes:
a training set is constructed from known human postures and skeleton node maps; through large-scale computation on conventional data and training with an artificial-intelligence adversarial network, a posture recognition model annotated with confidence values is generated and used as a relative reference for subsequent posture recognition, thereby improving the processing efficiency of the raw video image.
In step S3, the gesture recognition includes:
based on the OpenPose technique, skeleton-node recognition is performed on the projection image, and the trained posture recognition model is used to compute the matching mapping with the highest confidence between the skeleton-node recognition result G and the model-library ground-truth result Gz:
[The confidence-matching formula between G and Gz is given only as an image in the original publication.]
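A sketch of this highest-confidence matching step. Because the confidence formula survives only as an image, a simple inverse mean joint distance stands in for the patent's confidence measure, and the node names and model-library layout are assumptions:

```python
import math

def node_similarity(g, gz):
    """Similarity between a detected skeleton g and a model-library
    skeleton gz, both dicts of node_name -> (x, y) in the same
    normalized frame. Inverse mean joint distance is a stand-in
    confidence, not the patent's own formula."""
    common = g.keys() & gz.keys()
    if not common:
        return 0.0
    d = sum(math.dist(g[n], gz[n]) for n in common) / len(common)
    return 1.0 / (1.0 + d)

def best_match(g, model_library):
    """model_library: dict of pose_label -> skeleton. Returns the label
    with the highest confidence, together with that confidence."""
    return max(((lbl, node_similarity(g, gz))
                for lbl, gz in model_library.items()),
               key=lambda t: t[1])
```

Identical skeletons score 1.0, so `best_match` picks the library pose (e.g. sitting, standing, absent) whose node layout is closest to the recognized front projection.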
according to the embodiment, the detection precision of the video image is improved from the aspect of an algorithm, and the dependence of a real-time on-duty monitoring system on professional equipment is reduced.
Example 2
The embodiment provides an on duty monitoring system based on human skeleton discernment and multi-angle conversion, includes:
the image processing module is used for extracting shoulder nodes, crotch nodes and tail bone nodes in the video image, and screening a non-front projection graph and a non-parallel projection graph according to the angles of connecting lines of the shoulder nodes and the crotch nodes and the distances between the crotch nodes and the tail bone nodes;
the image conversion module is used for respectively converting the non-front projection drawing and the non-parallel projection drawing into a front projection drawing according to a transverse angle conversion method and a longitudinal distance conversion method;
and the bone recognition module is used for recognizing the corresponding on-duty posture by adopting a pre-trained posture recognition model according to the bone characteristics extracted from the front projection drawing.
It should be noted that the above modules correspond to steps S1 to S3 in embodiment 1; the modules share the same examples and application scenarios as the corresponding steps, but are not limited to the disclosure of embodiment 1. The modules described above, as part of a system, may be implemented in a computer system such as a set of computer-executable instructions.
Example 3
In further embodiments, there is also provided an electronic device comprising a memory and a processor and computer instructions stored on the memory and executed on the processor, which when executed by the processor, perform the method described in embodiment 1. For brevity, no further description is provided herein.
It should be understood that in this embodiment, the processor may be a central processing unit CPU, and the processor may also be other general purpose processors, digital signal processors DSP, application specific integrated circuits ASIC, off-the-shelf programmable gate arrays FPGA or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so on. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may include both read-only memory and random access memory, and may provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
Example 4
In further embodiments, a computer-readable storage medium is also provided for storing computer instructions that, when executed by a processor, perform the method described in embodiment 1.
The method in embodiment 1 may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software modules may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, this is not described in detail here.
Example 5
In further embodiments, an attendance platform is also provided. Apart from the bottom-layer component, which performs on-duty recognition on video image information, the overall architecture of the attendance platform is divided into the modules, components and layers shown in fig. 4, specifically:
the presentation layer is used for providing an interface protocol to be connected with an external terminal;
the business control layer is used for receiving the attendance instruction, setting the attendance cycle and the attendance interval time;
the business logic layer is used for analyzing the attendance checking instruction, judging whether attendance checking is needed or not, and acquiring a video image according to the attendance checking instruction if the attendance checking is needed;
the bottom layer assembly is used for carrying out on-duty gesture recognition on the video image by adopting the method in the embodiment 1 to obtain an on-duty detection result;
and the database is used for storing the on-duty detection result.
Specifically, the presentation layer provides protocols including: HTML, CSS, layui, and the like, which provide an interface with an external terminal such as a PC web site.
Specifically, the service control layer is further configured to perform access check on an accessed external terminal, such as password check, permission check, and the like.
Specifically, the database adopts an MYSQL database and stores relevant data information of the attendance checking platform, wherein the relevant data information comprises platform configuration management information, data display information, routing inspection information, log records and the like; the related data information of the attendance checking platform is received by the service control layer, analyzed by the service logic layer and then stored in the database.
Preferably, in the platform configuration management information:
role management includes setting different role categories, such as monitor, monitored person, and the like;
personnel management includes the specific assignment of a role to each person, where different roles correspond to different permissions;
basic configuration includes the clock-in and clock-out times, the activity area, the off-duty time threshold, and the like;
system parameters include configuration information of the entire system, such as startup and shutdown times, image sampling frequency, buffer space size, and the like.
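The configuration items above might be grouped, for example, into a single configuration mapping. Every key and value below is a hypothetical illustration, not taken from the patent:

```python
# Illustrative attendance-platform configuration (all values assumed).
platform_config = {
    "roles": ["monitor", "monitored"],   # role management categories
    "basic": {
        "work_start": "09:00",           # clock-in time
        "work_end": "18:00",             # clock-out time
        "activity_area": "zone-A",
        "off_duty_threshold_s": 300,     # off-duty time threshold
    },
    "system": {
        "startup": "08:30",
        "shutdown": "18:30",
        "image_sampling_hz": 1,          # image sampling frequency
        "buffer_mb": 256,                # buffer space size
    },
}

def within_threshold(absent_seconds, config):
    """True if an absence is still within the off-duty time threshold."""
    return absent_seconds <= config["basic"]["off_duty_threshold_s"]
```

Keeping the basic configuration separate from the system parameters, as the text suggests, lets administrators change working hours without touching system-wide settings.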
Preferably, in the data display information: the convenience service station and the party activity room refer to non-working areas inside the company, or areas outside the company posts, and are used for reminding off-post personnel in real time.
Preferably, in the inspection module: camera inspection aims to ensure that the cameras used for image acquisition operate normally;
inspection of the service station and the activity room aims to provide finer-grained monitoring logic, such as distinguishing personnel who leave their post for the activity room inside the company from personnel who leave the company premises.
Embodiment 6
In further embodiments, an attendance method of the attendance platform described in embodiment 5 is further provided, as shown in fig. 5, including:
(1) Configuring attendance cycle and attendance interval time of an attendance task;
(2) Acquiring an attendance checking instruction;
(3) Parsing the detection parameters in the attendance instruction and judging whether an attendance check is required; if not, the attendance check ends; if so, acquiring a video image according to the attendance instruction;
(4) Performing on-duty gesture recognition on the video image by adopting the method in the embodiment 1 to obtain an on-duty detection result;
(5) Storing the on-duty detection result, and returning to step (2) after the interval time of the attendance task.
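Steps (2) through (5) form a loop driven by the attendance interval. A minimal sketch of that loop follows; the function names and parameters are our assumptions, and the embodiment-1 recognizer is passed in as a stub:

```python
import time

def attendance_loop(get_instruction, recognize, store, interval_s, max_iters=3):
    """Hypothetical sketch of steps (2)-(5): fetch an attendance
    instruction, decide whether to check, recognize the on-duty
    posture, store the result, then wait for the interval."""
    results = []
    for _ in range(max_iters):
        instruction = get_instruction()              # step (2)
        if not instruction.get("check", False):      # step (3): no check needed
            break                                    # attendance check ends
        image = instruction.get("image")             # acquire video image
        result = recognize(image)                    # step (4): embodiment-1 method (stubbed)
        store(result)                                # step (5): persist result
        results.append(result)
        time.sleep(interval_s)                       # wait attendance interval
    return results
```

A usage example with an in-memory store and a trivial recognizer: feed one instruction that requires a check and one that does not, and the loop performs exactly one recognition before ending.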
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above is only a preferred embodiment of the present invention and is not intended to limit it; various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within its protection scope.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, this does not limit the scope of the present invention; those skilled in the art can make various modifications and variations, without inventive effort, on the basis of the technical solution of the present invention.
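The transverse and longitudinal conversion formulas recited in claim 1 below can be sketched in code. Variable names follow the claims (j*: shoulder distances, k*: crotch distances); the interpretation of the ratios as a rotation estimate is our reading of the claim language, not a verbatim implementation:

```python
import math

def rotation_degree(j1, j2, jz1, jz2, k1, k2, kz1, kz2):
    """Sketch of the claimed transverse angle conversion: estimate the
    body's rotation from front-projection (j1, j2, k1, k2) and
    side-projection (jz1, jz2, kz1, kz2) shoulder/crotch distances."""
    j = j1 + j2                    # front-projection shoulder width
    jl = j / (jz1 + jz2)           # front width over actual projected width
    jr = ((jz1 + jz2) / 2) * jl    # shoulder radius
    jd = j1 / jr                   # left-shoulder share of the radius
    k = k1 + k2                    # front-projection crotch width
    kl = k / (kz1 + kz2)
    kr = ((kz1 + kz2) / 2) * kl    # crotch radius
    kd = k1 / kr
    return (jd + kd) / 2           # averaged rotation degree

def longitudinal_shift(j, alpha_rad):
    """Sketch of the claimed longitudinal distance conversion:
    jh = j * sin(alpha); the longitudinal shift is jh / 2."""
    return (j * math.sin(alpha_rad)) / 2
```

In the symmetric case (equal left/right distances in both projections) the sketch yields a rotation degree of 1, and a zero angle alpha yields no longitudinal shift, which matches the intuition behind the claimed formulas.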

Claims (7)

1. An on-duty monitoring method based on human skeleton recognition and multi-angle conversion is characterized by comprising the following steps:
extracting the shoulder nodes, crotch nodes and tailbone node from a video image, and screening out a non-front projection graph and a non-parallel projection graph according to the included angles between the line connecting the two shoulder nodes, the line connecting the two crotch nodes, and the parallel line, and the distances between the crotch nodes and the tailbone node;
the screening non-frontal projection views and non-parallel projection views include:
connecting the two shoulder nodes, connecting the two crotch nodes, and connecting the tailbone node with the left and right crotch nodes;
judging whether the included angles between the line connecting the two shoulder nodes, the line connecting the two crotch nodes, and the parallel line are parallel angles;
judging whether the angle is a front projection angle by comparing the difference between the distances from the tailbone node to the two crotch nodes against a normal range value;
if the video image meets the parallel angle condition and the non-front projection angle condition, the video image is a non-front projection image; if the video image does not meet the condition of the parallel angle, the video image is a non-parallel projection image;
respectively converting the non-front projection drawing and the non-parallel projection drawing into front projection drawings according to a transverse angle conversion method and a longitudinal distance conversion method;
the transverse angle conversion method comprises converting the node coordinates of the non-parallel projection graph into the node coordinates of a front projection graph according to the ratio of the difference between the distances from the tailbone node to the two crotch nodes, and the ratio of the conventional front projection width to the actual projection-angle width; specifically, the transverse conversion method is:
j1+j2=j;
j/(jz1+jz2)=jl;
jr=((jz1+jz2)/2)*jl;
jd=j1/jr;
k1+k2=k;
k/(kz1+kz2)=kl;
kr=((kz1+kz2)/2)*kl;
kd=k1/kr;
the rotation degree is as follows: (jd + kd)/2;
wherein j1 represents the distance between the left shoulder node and the intersection point in the front projection graph, j2 represents the distance between the right shoulder node and the intersection point in the front projection graph, k1 represents the distance between the left crotch node and the tailbone node in the front projection graph, and k2 represents the distance between the right crotch node and the tailbone node in the front projection graph;
jz1 represents the distance between the left shoulder node and the intersection point in the side projection graph, jz2 represents the distance between the right shoulder node and the intersection point in the side projection graph, kz1 represents the distance between the left crotch node and the tailbone node in the side projection graph, and kz2 represents the distance between the right crotch node and the tailbone node in the side projection graph; jr represents the shoulder radius; kr represents the crotch radius;
the longitudinal distance conversion method comprises the steps of calculating the translation distance of two shoulders according to the centers of two crotch nodes, and calculating the longitudinal movement distance of the two shoulders through a trigonometric function; specifically, the longitudinal transformation calculation method:
Cn=sin(α);
jh=j*Cn;
thus, the longitudinal transition distance is: jh/2; jx = jz; jz/kz is approximately equal to j/k;
thus, the lateral transition distance is: km × (j/k);
wherein jz represents the distance between a left shoulder node and a right shoulder node, kz represents the distance between a left crotch node and a right crotch node, jx represents a parallel line, alpha represents the included angle between a line jz and jx, and km represents the horizontal distance between the midpoint of the line jx and a tailbone node on the line kz;
and recognizing by adopting a pre-trained gesture recognition model according to the bone features extracted from the front projection drawing to obtain the corresponding on-duty gesture.
2. The on-duty monitoring method based on human skeleton recognition and multi-angle conversion according to claim 1, wherein pre-processing the video images comprises pre-selecting and proportionally cropping the video images in two dimensions, merging highly concentrated image blocks, and compressing the merged images.
3. An on-duty monitoring system based on human skeleton recognition and multi-angle conversion is characterized by comprising:
the image processing module, used for extracting the shoulder nodes, crotch nodes and tailbone node from a video image, and screening out a non-front projection graph and a non-parallel projection graph according to the included angles between the line connecting the two shoulder nodes, the line connecting the two crotch nodes, and the parallel line, and the distances between the crotch nodes and the tailbone node;
the screening non-frontal projection views and non-parallel projection views comprise:
connecting the two shoulder nodes, connecting the two crotch nodes, and connecting the tailbone node with the left and right crotch nodes;
judging whether included angles between the connecting lines of the two shoulder nodes and the connecting lines of the two crotch nodes and the parallel lines are parallel angles or not;
judging whether the angle is a front projection angle by comparing the difference between the distances from the tailbone node to the two crotch nodes against a normal range value;
if the video image meets the parallel angle condition and the non-front projection angle condition, the video image is a non-front projection image; if the video image does not meet the parallel angle condition, the video image is a non-parallel projection image;
the image conversion module is used for respectively converting the non-front projection drawing and the non-parallel projection drawing into a front projection drawing according to a transverse angle conversion method and a longitudinal distance conversion method;
the transverse angle conversion method comprises converting the node coordinates of the non-parallel projection graph into the node coordinates of a front projection graph according to the ratio of the difference between the distances from the tailbone node to the two crotch nodes, and the ratio of the conventional front projection width to the actual projection-angle width; specifically, the transverse conversion method is:
j1+j2=j;
j/(jz1+jz2)=jl;
jr=((jz1+jz2)/2)*jl;
jd=j1/jr;
k1+k2=k;
k/(kz1+kz2)=kl;
kr=((kz1+kz2)/2)*kl;
kd=k1/kr;
the rotation degree is as follows: (jd + kd)/2;
wherein j1 represents the distance between the left shoulder node and the intersection point in the front projection graph, j2 represents the distance between the right shoulder node and the intersection point in the front projection graph, k1 represents the distance between the left crotch node and the tailbone node in the front projection graph, and k2 represents the distance between the right crotch node and the tailbone node in the front projection graph;
jz1 represents the distance between the left shoulder node and the intersection point in the side projection graph, jz2 represents the distance between the right shoulder node and the intersection point in the side projection graph, kz1 represents the distance between the left crotch node and the tailbone node in the side projection graph, and kz2 represents the distance between the right crotch node and the tailbone node in the side projection graph; jr represents the shoulder radius; kr represents the crotch radius;
the longitudinal distance conversion method comprises the steps of calculating the translation distance of two shoulders according to the centers of two crotch nodes, and calculating the longitudinal movement distance of the two shoulders through a trigonometric function; specifically, the longitudinal transformation calculation method:
Cn=sin(α);
jh=j*Cn;
thus, the longitudinal transition distance is: jh/2; jx = jz; jz/kz is approximately equal to j/k;
thus, the lateral transition distance is: km × (j/k);
wherein jz represents the distance between a left shoulder node and a right shoulder node, kz represents the distance between a left crotch node and a right crotch node, jx represents a parallel line, alpha represents the included angle between a line jz and jx, and km represents the horizontal distance between the midpoint of the line jx and a tailbone node on the line kz;
and the bone recognition module is used for recognizing and obtaining the corresponding on-duty posture by adopting a pre-trained posture recognition model according to the bone characteristics extracted from the front projection drawing.
4. An electronic device, comprising a memory, a processor, and computer instructions stored in the memory and executed on the processor, wherein the computer instructions, when executed by the processor, perform the method of any one of claims 1-2.
5. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the method of any one of claims 1-2.
6. An attendance platform, comprising:
the service control layer is used for receiving the attendance checking instruction;
the business logic layer is used for analyzing the attendance instruction and acquiring a video image according to the attendance instruction;
the bottom-layer component, used for performing on-duty gesture recognition on the video image by adopting the method of any one of claims 1-2 to obtain an on-duty detection result.
7. An attendance checking method, characterized by comprising:
acquiring an attendance checking instruction;
analyzing the attendance checking instruction, and acquiring a video image according to the attendance checking instruction;
performing on-duty gesture recognition on the video image by the method of any one of claims 1-2 to obtain an on-duty detection result.
CN202011404353.XA 2020-12-04 2020-12-04 On-duty monitoring method and system based on human skeleton recognition and multi-angle conversion Active CN112488005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011404353.XA CN112488005B (en) 2020-12-04 2020-12-04 On-duty monitoring method and system based on human skeleton recognition and multi-angle conversion


Publications (2)

Publication Number Publication Date
CN112488005A CN112488005A (en) 2021-03-12
CN112488005B true CN112488005B (en) 2022-10-14

Family

ID=74939359


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894278A (en) * 2010-07-16 2010-11-24 西安电子科技大学 Human motion tracing method based on variable structure multi-model
CN109949368A (en) * 2019-03-14 2019-06-28 郑州大学 A kind of human body three-dimensional Attitude estimation method based on image retrieval
CN110425005A (en) * 2019-06-21 2019-11-08 中国矿业大学 The monitoring of transportation of belt below mine personnel's human-computer interaction behavior safety and method for early warning
WO2020207281A1 (en) * 2019-04-12 2020-10-15 腾讯科技(深圳)有限公司 Method for training posture recognition model, and image recognition method and apparatus
CN111914790A (en) * 2020-08-14 2020-11-10 电子科技大学 Real-time human body rotation angle identification method based on double cameras under different scenes


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Reconstructing Three-Dimensional Human Poses: A Combined Approach of Iterative Calculation on Skeleton Model and Conformal Geometric Algebra; Xin Huang et al.; Symmetry; 2019-02-28; pp. 1-26 *
Robust human pose estimation from distorted wide-angle images through iterative search of transformation parameters; Daisuke Miki et al.; Signal, Image and Video Processing; 2019-11-19; pp. 693-700 *
Study on the kinematic characteristics of the full-swing technique of elite golfers; Zhu Liming et al.; Journal of Guangzhou Sport University; 2018-01-31; vol. 38, no. 1, pp. 90-93 *
Visualization processing of spatial geometric transformation algorithms; Du Shujie et al.; Modern Computer (Professional Edition); 2007-08-15; pp. 37-39 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant