CN111241926A - Attendance checking and learning condition analysis method, system, equipment and readable storage medium - Google Patents

Attendance checking and learning condition analysis method, system, equipment and readable storage medium

Info

Publication number
CN111241926A
CN111241926A (application CN201911387760.1A)
Authority
CN
China
Prior art keywords
face
attendance
image
characteristic value
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911387760.1A
Other languages
Chinese (zh)
Inventor
何学智
林林
余泽凡
刘小扬
黄自力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Newland Digital Technology Co ltd
Original Assignee
Newland Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Newland Digital Technology Co ltd filed Critical Newland Digital Technology Co ltd
Priority to CN201911387760.1A
Publication of CN111241926A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/20 Education
    • G06Q 50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 1/00 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C 1/10 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people, together with the recording, indicating or registering of other data, e.g. of signs of identity

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Educational Technology (AREA)
  • Tourism & Hospitality (AREA)
  • Human Computer Interaction (AREA)
  • Educational Administration (AREA)
  • Strategic Management (AREA)
  • Studio Devices (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)
  • Software Systems (AREA)

Abstract

The invention discloses an attendance and learning-situation analysis method, which comprises the steps of: collecting video images and performing hardware decoding on them; extracting face feature values from the video images at the edge end and uploading the face feature values to a server for attendance checking; estimating human body posture in the video images at the edge end and analyzing learning-situation behavior; matching the face feature values with the learning-situation behavior analysis results by timestamp and position; and uploading the matched learning-situation behavior analysis results to the server. The scheme realizes automatic attendance checking of students and automatic analysis of their learning situation, and reuses the cameras already installed in classrooms, which reduces deployment overhead. In addition, because edge computing replaces the traditional server scheme and moves computation to the front end, only the AI computation results, rather than the video data, are transmitted over the network, which saves a large amount of bandwidth and increases speed.

Description

Attendance checking and learning condition analysis method, system, equipment and readable storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an attendance and learning condition analysis method, system, equipment and readable storage medium.
Background
In recent years, with the development of deep learning and the support of national policy, artificial-intelligence technology based on deep learning has been applied ever more widely in the field of education. Attendance checking and learning-situation analysis, as two major parts of teaching management, naturally receive a great deal of attention. At present, automatic attendance management mainly uses AI face recognition and is deployed on a server. An attendance system based on face recognition generally comprises a camera for collecting face images and a face recognition module that compares the current face image with face information pre-stored in the system; if they match, the attendance check succeeds. The AI algorithm is mostly deployed on a PC server or in the cloud, which brings problems such as lack of good decoding support, high cost, large volume, high configuration requirements and low speed. In addition, traditional algorithms cannot currently support learning-situation analysis and cannot identify whether students in a classroom are studying; only an AI algorithm can complete learning-situation analysis with high accuracy, but it still faces problems such as high configuration requirements and low speed.
Disclosure of Invention
The invention aims to provide an attendance and learning-situation analysis method, system, equipment and readable storage medium that are fast and place low demands on the hardware environment.
In order to solve the technical problems, the technical scheme of the invention is as follows:
in a first aspect, the invention provides an attendance and academic situation analysis method, which comprises the following steps:
acquiring a video image, and performing hardware decoding on the video image;
extracting a face characteristic value of the video image at an edge end, and uploading the face characteristic value to a server for attendance checking;
estimating the human body posture in the video image at the edge end, and analyzing the learning situation behavior;
matching the face characteristic value with the learning-situation behavior analysis result through a time stamp and a position;
and uploading the learning-situation behavior analysis result matched with the face characteristic value to the server.
Preferably, the steps of extracting the face characteristic value of the video image at the edge end and uploading the face characteristic value to a server for attendance checking are as follows:
detecting a face in the image, and acquiring face coordinates and key point coordinates;
carrying out alignment correction on the detected face;
carrying out face tracking on the face in the video image;
detecting the inclination degree of the face, and filtering out images with the inclination degree larger than a threshold value;
and acquiring the face characteristics of the image, and uploading the face characteristic value to a server for attendance checking.
Preferably, face tracking is performed on the faces in the video image through intersection over union (IoU): when the overlap ratio between the face frame of the current frame image and a face frame of the previous frame image is greater than a preset value, the two frames are judged to contain the same face.
Preferably, the step of detecting the inclination degree of the human face and filtering out the image with the inclination degree greater than the threshold value includes:
acquiring the point coordinates of the nose, left eye, right eye, left mouth corner and right mouth corner of the human face;
the minimum distance from the nose to the connecting line of the left eye and the left mouth corner is a first distance, the minimum distance from the nose to the connecting line of the right eye and the right mouth corner is a second distance, the smaller value of the first distance and the second distance is divided by the width of the face rectangular frame, and the image is filtered when the ratio is smaller than a threshold value.
Preferably, human posture estimation is realized through the human posture estimation algorithm OpenPose: the joints of all people in the image are detected as key points, the detected key points are assigned to each corresponding person, and learning-situation behavior analysis is performed by matching the positional relationship between each person's key points against each standard action pattern.
In a second aspect, the invention provides an attendance and learning-situation analysis system, which comprises an image acquisition module, a server and an edge computing processor;
the image acquisition module: collecting a video image;
the server: receives and stores the face characteristic value and the learning-situation behavior analysis result uploaded by the result uploading module;
the edge calculation processor includes:
a hardware decoding module: performing hardware decoding on the video image;
a face recognition module: extracting face characteristic values of the video images;
the study situation analysis module: estimating the human body posture in the video image at the edge end, and analyzing the learning situation behavior;
a matching module: matching the face characteristic value with the learning-situation behavior analysis result through a timestamp and a position;
a result uploading module: and uploading the face characteristic value and the learning situation behavior analysis result to a server.
Preferably, the image acquisition module comprises a switch, a network video recorder and a plurality of cameras.
Preferably, the edge calculation processor has a hardware decoding function and a neural network calculation acceleration engine.
In a third aspect, the present invention further provides an attendance and academic aptitude analysis apparatus, including a memory, a processor, and a computer program stored on the memory and operable on the processor, where the processor executes the computer program to implement the steps of the attendance and academic aptitude analysis method.
In a fourth aspect, the present invention further provides a readable storage medium for attendance and academic aptitude analysis, on which a computer program is stored, the computer program being executed by a processor to implement the steps of the attendance and academic aptitude analysis method.
By adopting the above technical scheme, the collected video image is hardware-decoded locally, students are checked for attendance through face recognition, individual learning behavior is analyzed through human posture estimation, and the two results are matched to the corresponding face, realizing automatic attendance checking and automatic learning-situation analysis. Because the scheme uses edge computing instead of the traditional server scheme and moves computation to the front end, only the AI computation results, rather than the video data, are transmitted over the network, which saves a large amount of bandwidth and increases speed.
Drawings
FIG. 1 is a flowchart illustrating steps of an attendance and academic aptitude analysis method according to an embodiment of the present invention;
FIG. 2 is a flowchart of step S20 in FIG. 1;
FIG. 3 is a schematic diagram of human face frame intersection and comparison of human face tracking in the attendance and academic aptitude analysis method of the present invention;
FIG. 4 is a schematic diagram of an image acquisition module according to an embodiment of the attendance and academic aptitude analysis system of the present invention;
fig. 5 is a schematic diagram of the attendance and academic situation analysis system of the invention.
In the figures: 10 - image acquisition module, 20 - edge computing processor, 30 - server, 21 - hardware decoding module, 22 - face recognition module, 23 - learning-situation analysis module, 24 - matching module, 25 - result uploading module.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, the invention provides an attendance and academic situation analysis method, which comprises the following steps:
S10: acquiring a video image and performing hardware decoding on it; in the embodiment of the invention, the RTSP video stream is hardware-decoded.
S20: extracting face characteristic values of the video images at the edge end, and uploading the face characteristic values to a server for attendance checking;
It should be noted that the server receives the face feature value, compares it with the feature values of the face pictures in the database to complete face recognition, and thereby realizes the attendance work.
S30: estimating the human body posture in the video image at the edge end, and analyzing learning-situation behavior. Specifically, human posture estimation is realized through the human posture estimation algorithm OpenPose: the joints of all people in the image are detected as key points, the detected key points are assigned to each corresponding person, and learning-situation behavior analysis is performed by matching the positional relationship between each person's key points against each standard action pattern.
In the embodiment of the invention, human-posture key points are obtained through the human posture estimation algorithm, and from them different behaviors of a person are computed, including studying, raising a hand, playing with a mobile phone and lying on the desk, which facilitates learning-situation behavior analysis.
Human posture estimation uses OpenPose, currently one of the popular multi-person pose estimation algorithms. OpenPose first detects the key points of all people in an image and then assigns the detected key points to each corresponding person.
The OpenPose network first extracts features from the image using its initial network layers. These features are then passed to two parallel convolutional branches. The first branch predicts 18 confidence maps, each representing one joint of the human skeleton. The second branch predicts a set of 38 Part Affinity Fields (PAFs), which describe the degree of association between joints.
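The patent does not give the concrete matching rules, but the idea of matching key-point positions against standard action patterns can be sketched with simple geometric rules on OpenPose-style key points. The key-point names, thresholds and behavior labels below are illustrative assumptions, not taken from the patent:

```python
def classify_behavior(kp):
    """Toy rule-based matcher: kp maps key-point names to (x, y) image
    coordinates, with y growing downward as in typical image output.
    The rules and labels are illustrative, not the patent's actual patterns."""
    nose_y = kp["nose"][1]
    shoulder_y = (kp["left_shoulder"][1] + kp["right_shoulder"][1]) / 2
    highest_wrist_y = min(kp["left_wrist"][1], kp["right_wrist"][1])
    if highest_wrist_y < nose_y:      # a wrist raised above the head
        return "raising_hand"
    if nose_y > shoulder_y:           # head dropped below the shoulder line
        return "lying_on_desk"
    return "attending"                # default: upright posture
```

A production system would match many more key points (elbows, hips) and smooth decisions over several frames rather than classify single frames in isolation.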
S40: matching the face feature value with the learning-situation behavior analysis result through the timestamp and position;
S50: uploading the learning-situation behavior analysis result matched with the face feature value to the server. The recognition result of each frame is uploaded to the server through FTP for summary analysis.
It should be noted that, in the embodiment of the invention, face detection, screening, face alignment, face recognition, human posture estimation and learning-situation analysis are performed at 2 frames per second.
It should be noted that, in this embodiment, the attendance check is completed first, while learning-situation analysis runs over a longer period; if a learning-situation behavior analysis result received by the server cannot be matched, through its face feature value, to a corresponding student in the database, that result is treated as invalid.
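The timestamp-and-position matching of step S40 can be sketched as a nearest-neighbour pairing. The record layout and the tolerance values below are assumptions for illustration only:

```python
import math

def match_records(faces, behaviors, max_dt=0.5, max_dist=40.0):
    """Pair face-recognition records with behavior-analysis records.
    Each record is (timestamp_seconds, (cx, cy), payload); a pair matches
    when the timestamps differ by at most max_dt seconds and the box
    centres are within max_dist pixels. Tolerances are illustrative."""
    matched = []
    for f_t, f_pos, f_payload in faces:
        best = None
        for b_t, b_pos, b_payload in behaviors:
            if abs(f_t - b_t) > max_dt:
                continue
            d = math.hypot(f_pos[0] - b_pos[0], f_pos[1] - b_pos[1])
            if d <= max_dist and (best is None or d < best[0]):
                best = (d, b_payload)  # keep the spatially closest candidate
        if best is not None:
            matched.append((f_payload, best[1]))
    return matched
```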
Referring to fig. 2, specifically, the steps of extracting the face feature value of the video image at the edge end, and uploading the face feature value to the server for attendance checking include:
S21: detecting faces in the image, and acquiring face coordinates and key-point coordinates;
S22: performing alignment correction on the detected faces; because faces in real scenes appear at various angles, face alignment corrects the detected faces as far as possible to improve the accuracy of face recognition.
S23: referring to fig. 3, performing face tracking on the faces in the video image through the intersection over union (IoU) of face frames: when the overlap ratio between the face frame of the current frame image and a face frame of the previous frame image is greater than a preset value, the two frames are judged to contain the same face. Because most students remain in their seats in a classroom scene, face tracking effectively improves face recognition efficiency.
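The intersection-over-union test used for tracking can be sketched as follows; the box format and the 0.5 threshold are assumptions, since the patent only says "greater than a preset value":

```python
def iou(box_a, box_b):
    """Intersection over union of two face boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def same_face(prev_box, curr_box, threshold=0.5):
    """Treat two boxes as the same face when their IoU exceeds the preset value."""
    return iou(prev_box, curr_box) > threshold
```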
S24: because faces appear at various angles in practice, a strongly turned side face greatly reduces face recognition accuracy, causing false recognition or failed recognition, so the face angle must be judged before face feature extraction. In this embodiment, the inclination degree of the face is detected, and images whose inclination exceeds a threshold are filtered out. The filtering step comprises:
acquiring the point coordinates of the nose, left eye, right eye, left mouth corner and right mouth corner of the face;
taking the minimum distance from the nose to the line connecting the left eye and the left mouth corner as the first distance, and the minimum distance from the nose to the line connecting the right eye and the right mouth corner as the second distance; the smaller of the two distances is divided by the width of the face rectangle, and the image is filtered out when this ratio is smaller than a threshold.
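The tilt filter of S24 reduces to a point-to-line distance computation. A minimal sketch, assuming (x, y) pixel coordinates; the 0.08 threshold is illustrative, since the patent does not give its value:

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(by - ay, bx - ax)
    # Degenerate line (a == b): fall back to point-to-point distance.
    return num / den if den > 0 else math.hypot(px - ax, py - ay)

def is_too_tilted(nose, left_eye, right_eye, left_mouth, right_mouth,
                  face_width, threshold=0.08):
    """First distance: nose to the (left eye, left mouth corner) line;
    second distance: nose to the (right eye, right mouth corner) line.
    Filter when the smaller distance over the face-box width falls
    below the threshold (an assumed value)."""
    d1 = point_line_distance(nose, left_eye, left_mouth)
    d2 = point_line_distance(nose, right_eye, right_mouth)
    return min(d1, d2) / face_width < threshold
```

Intuitively, on a frontal face the nose sits well away from both eye-to-mouth lines; as the head turns, the nose approaches one of them and the ratio collapses toward zero.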
S25: acquiring the face features of the image and uploading the face feature value to the server for attendance checking. The server compares the face feature value with the pre-extracted person features and takes the person ID with the greatest similarity as the recognition result.
In the embodiment of the invention, a 512-dimensional deep feature is extracted from the face picture by a deep-learning AI algorithm and then compared with the pre-extracted person features to obtain the person ID with the greatest similarity.
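The similarity comparison over the 512-dimensional embeddings is commonly done with cosine similarity; the patent does not name the metric, so cosine similarity is an assumption here:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(feature, gallery):
    """gallery maps person ID -> pre-extracted feature vector.
    Returns the (person_id, similarity) pair with the greatest similarity."""
    best_id = max(gallery, key=lambda pid: cosine_similarity(feature, gallery[pid]))
    return best_id, cosine_similarity(feature, gallery[best_id])
```

In practice the server would also apply a minimum-similarity threshold, reporting "unknown" below it rather than forcing a match.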
To improve face recognition precision, the invention provides dynamic high-resolution face feature extraction. The algorithm flow is: 1) compute the area of the face frame obtained by face detection; 2) according to the face-frame area, go back to a video source of the appropriate resolution and crop the face from it for feature extraction. If the face area is smaller than 96 × 112, features are extracted from the 5-megapixel (500W) video stream; if the area is between 96 × 112 and 128 × 144, features are extracted from the face image cropped from the 4-megapixel (400W) video stream; if the area is larger than 128 × 144, features are extracted from the face on the 2-megapixel (200W) video stream.
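The three-way resolution selection follows directly from the stated thresholds ("500W" etc. are Chinese shorthand for the 5/4/2-megapixel streams); a minimal sketch:

```python
def pick_stream(face_w, face_h):
    """Choose the video stream to crop the face from, based on the area of
    the detected face frame; thresholds follow the 96x112 and 128x144
    areas given in the text."""
    area = face_w * face_h
    if area < 96 * 112:          # small face: need the 5MP (500W) stream
        return "5MP"
    if area < 128 * 144:         # medium face: the 4MP (400W) stream
        return "4MP"
    return "2MP"                 # large face: the 2MP (200W) stream suffices
```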
In a second aspect, the invention provides an attendance and learning-situation analysis system, which comprises an image acquisition module 10, a server 30 and an edge computing processor 20;
the image acquisition module 10: collecting a video image;
the server 30: receives and stores the face feature value and the learning-situation behavior analysis result uploaded by the result uploading module 25;
the edge calculation processor 20 includes:
the hardware decoding module 21: performing hardware decoding on the video image;
the face recognition module 22: extracting face characteristic values of the video images;
the learning context analysis module 23: estimating the human body posture in the video image at the edge end, and analyzing the learning situation behavior;
the matching module 24: matching the face feature value with the learning-situation behavior analysis result through a timestamp and position;
result uploading module 25: and uploading the face characteristic value and the learning behavior analysis result to a server.
Referring to fig. 4, in particular, the image capturing module includes a switch, a network video recorder, and a plurality of cameras.
Specifically, the edge calculation processor 20 has a hardware decoding function, and a neural network calculation acceleration engine.
The edge computing processor 20 in this embodiment uses the HiSilicon Hi3559A, a professional 8K Ultra HD mobile camera SoC that provides digital video recording at 8K30/4K120 broadcast-grade image quality, supports H.265 encoded output or cinema-grade RAW data output, and integrates high-performance ISP processing. It supports advanced multi-channel 4K sensor input and multi-channel ISP image processing, and integrates HiSilicon's proprietary SVP platform, which provides efficient and abundant computing resources and supports customers in developing various computer-vision applications.
In another aspect, the present invention further provides an attendance and academic aptitude analysis apparatus, including a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the attendance and academic aptitude analysis method when executing the program.
In another aspect, the present invention further provides a readable storage medium for attendance and academic aptitude analysis, on which a computer program is stored, the computer program being executed by a processor to implement the steps of the attendance and academic aptitude analysis method.
By adopting the above technical scheme, the collected video image is hardware-decoded locally, students are checked for attendance through face recognition, individual learning behavior is analyzed through human posture estimation, and the two results are matched to the corresponding face, realizing automatic attendance checking and automatic learning-situation analysis. Because the scheme uses edge computing instead of the traditional server scheme and moves computation to the front end, only the AI computation results, rather than the video data, are transmitted over the network, which saves a large amount of bandwidth and increases speed.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, and such variations still fall within the scope of protection of the invention.

Claims (10)

1. An attendance and learning condition analysis method is characterized by comprising the following steps:
acquiring a video image, and performing hardware decoding on the video image;
extracting a face characteristic value of the video image at an edge end, and uploading the face characteristic value to a server for attendance checking;
estimating the human body posture in the video image at the edge end, and analyzing the learning situation behavior;
matching the face characteristic value with the learning-situation behavior analysis result through a time stamp and a position;
and uploading the learning-situation behavior analysis result matched with the face characteristic value to the server.
2. The attendance and academic situation analysis method according to claim 1, wherein the steps of extracting the face feature value of the video image at the edge end and uploading the face feature value to the server for attendance are as follows:
detecting a face in the image, and acquiring face coordinates and key point coordinates;
carrying out alignment correction on the detected face;
carrying out face tracking on the face in the video image;
detecting the inclination degree of the face, and filtering out images with the inclination degree larger than a threshold value;
and acquiring the face characteristics of the image, and uploading the face characteristic value to a server for attendance checking.
3. The attendance and learning-situation analysis method according to claim 2, wherein face tracking is performed on the faces in the video image through intersection over union, and when the overlap ratio of the face frame of the current frame image to the face frame of the previous frame image is greater than a preset value, the current-frame face and the previous-frame face are judged to be the same face.
4. The attendance and academic situation analysis method according to claim 2, wherein the step of detecting the inclination degree of the face and filtering out the image with the inclination degree greater than the threshold value comprises:
acquiring the point coordinates of the nose, left eye, right eye, left mouth corner and right mouth corner of the human face;
the minimum distance from the nose to the connecting line of the left eye and the left mouth corner is a first distance, the minimum distance from the nose to the connecting line of the right eye and the right mouth corner is a second distance, the smaller value of the first distance and the second distance is divided by the width of the face rectangular frame, and the image is filtered when the ratio is smaller than a threshold value.
5. The attendance and learning-situation analysis method according to claim 1, wherein human posture estimation is implemented by the human posture estimation algorithm OpenPose: joints of all persons in the image are detected as key points, the detected key points are assigned to each corresponding person, and learning-situation behavior analysis is performed by matching the positional relationship between each person's key points against each standard action pattern.
6. An attendance and learning situation analysis system is characterized by comprising an image acquisition module, a server and an edge calculation processor;
the image acquisition module: collecting a video image;
the server: receives and stores the face characteristic value and the learning-situation behavior analysis result uploaded by the result uploading module;
the edge calculation processor includes:
a hardware decoding module: performing hardware decoding on the video image;
a face recognition module: extracting face characteristic values of the video images;
the study situation analysis module: estimating the human body posture in the video image at the edge end, and analyzing the learning situation behavior;
a matching module: matching the face characteristic value with the learning-situation behavior analysis result through a timestamp and a position;
a result uploading module: and uploading the face characteristic value and the learning situation behavior analysis result to a server.
7. The system of claim 6, wherein the image capture module comprises a switch, a network video recorder, and a plurality of cameras.
8. The system of claim 6, wherein the edge computing processor has a hardware decoding function and a neural network computing acceleration engine.
9. An attendance and learning-situation analysis device, comprising a memory, a processor and a computer program stored on the memory and operable on the processor, characterized in that: the processor, when executing the program, implements the steps of the attendance and learning-situation analysis method of any one of claims 1 to 5.
10. A readable storage medium for attendance and learning-situation analysis, having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, implements the steps of the attendance and learning-situation analysis method of any one of claims 1 to 5.
CN201911387760.1A 2019-12-30 2019-12-30 Attendance checking and learning condition analysis method, system, equipment and readable storage medium Pending CN111241926A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911387760.1A CN111241926A (en) 2019-12-30 2019-12-30 Attendance checking and learning condition analysis method, system, equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN111241926A true CN111241926A (en) 2020-06-05

Family

ID=70874089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911387760.1A Pending CN111241926A (en) 2019-12-30 2019-12-30 Attendance checking and learning condition analysis method, system, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111241926A (en)




Similar Documents

Publication Publication Date Title
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
CN110458895B (en) Image coordinate system conversion method, device, equipment and storage medium
CN110363131B (en) Abnormal behavior detection method, system and medium based on human skeleton
CN109145717B (en) Face recognition method for online learning
CN111241926A (en) Attendance checking and learning condition analysis method, system, equipment and readable storage medium
JP7292492B2 (en) Object tracking method and device, storage medium and computer program
CN114332911A (en) Head posture detection method and device and computer equipment
JP2007052609A (en) Hand area detection device, hand area detection method and program
CN112966574A (en) Human body three-dimensional key point prediction method and device and electronic equipment
CN110969045A (en) Behavior detection method and device, electronic equipment and storage medium
CN113705510A (en) Target identification tracking method, device, equipment and storage medium
CN111382655A (en) Hand-lifting behavior identification method and device and electronic equipment
CN114332927A (en) Classroom hand-raising behavior detection method, system, computer equipment and storage medium
CN111626212B (en) Method and device for identifying object in picture, storage medium and electronic device
US10438066B2 (en) Evaluation of models generated from objects in video
CN116246299A (en) Low-head-group intelligent recognition system combining target detection and gesture recognition technology
CN112580526A (en) Student classroom behavior identification system based on video monitoring
CN113496200A (en) Data processing method and device, electronic equipment and storage medium
CN112200698A (en) Campus social relationship big data analysis system based on artificial intelligence
CN220913571U (en) Intelligent terminal for intelligent campus online examination
CN112488005B (en) On-duty monitoring method and system based on human skeleton recognition and multi-angle conversion
CN113269013B (en) Object behavior analysis method, information display method and electronic equipment
CN112700494A (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN114937227A (en) Primary and secondary school student movement scoring system based on machine vision
CN116798109A (en) Method and device for identifying action type

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination