CN116311554A - Student classroom abnormal behavior identification method and system based on video target detection - Google Patents

Student classroom abnormal behavior identification method and system based on video target detection

Info

Publication number
CN116311554A
Authority
CN
China
Prior art keywords
behavior
student
face
abnormal
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310143918.0A
Other languages
Chinese (zh)
Inventor
陈欣
黄杰
贾靖禹
徐骜
张习伟
孙晓
汪萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Original Assignee
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Artificial Intelligence of Hefei Comprehensive National Science Center filed Critical Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Priority to CN202310143918.0A priority Critical patent/CN116311554A/en
Publication of CN116311554A publication Critical patent/CN116311554A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/62Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/48Matching video sequences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of video image processing, and discloses a method and system for identifying students' abnormal classroom behaviors based on video object detection. The method comprises identifying student identities, identifying students' abnormal behaviors, and acquiring the identity information of students exhibiting abnormal behavior. The invention uses a single model to simultaneously acquire each student's behavior category and behavior trajectory in class, which reduces the amount of computation; a feature enhancement strategy improves the accuracy of behavior localization and behavior recognition; and the trajectory along which each behavior occurs can be perceived, improving the accuracy with which students exhibiting abnormal behavior are identified.

Description

Student classroom abnormal behavior identification method and system based on video target detection
Technical Field
The invention relates to the field of video image processing methods, in particular to a student classroom abnormal behavior identification method and system based on video target detection.
Background
In class, a student's behavior reflects his or her learning state; to better grasp that state, abnormal classroom behaviors need to be identified.
Prior-art methods for identifying students' abnormal classroom behaviors fall into two categories:
1. Based on single-frame images:
A target detection algorithm first obtains the position of each target of interest, and a behavior recognition algorithm then obtains each target's behavior.
2. Based on video sequences:
The position of each target of interest is first obtained in every frame of the video sequence, an image set is constructed for each target from these positions, and the image set is then fed into a video-based behavior classification network.
The technical defects of the existing methods are as follows:
Behavior recognition based on either a single frame or a video sequence requires first localizing targets of interest with a detection algorithm and then running a classification algorithm on each target, which is computationally expensive. In addition, with single-frame methods, motion blur, occlusion, or camera defocus makes it difficult to determine a target's behavior position and category from one image.
Disclosure of Invention
In order to solve the technical problems, the invention provides a student classroom abnormal behavior identification method and system based on video target detection.
In order to solve the technical problems, the invention adopts the following technical scheme:
a student classroom abnormal behavior identification method based on video target detection comprises the following steps:
step one, acquiring a student lesson video in real time, and acquiring the face position of each student in the student lesson video by using a face detection algorithm; the face feature extraction algorithm is utilized to obtain the face feature of each student face, and the face feature is compared with the student face feature library to obtain the corresponding identity of each face;
step two, obtaining the behavior position, the behavior initial category and the behavior score of each student in each frame of image of the student lesson video;
step three, for each student ST in each frame image j Will student ST j Expanding the current behavior position by S times to serve as a candidate area, and performing cross-ratio matching with other frame images in the candidate area to obtain student ST j Counting the occurrence frequency of each abnormal behavior in all frame images in the behavior positions, the initial classes and the behavior scores matched with other frame images, finding out the abnormal behavior A with the highest occurrence frequency, and judging whether the occurrence times of the abnormal behaviors A of students in all frame images exceeds M times; if yes, consider student ST j Abnormal behavior A occurs, and the behavior position of the abnormal behavior A is obtained; if not, consider student ST j No abnormal behavior a occurs; j is more than or equal to 1 and p is more than or equal to pThe total number of students in class;
step four, for students ST with abnormal behaviors A j Finding out a behavior position B corresponding to the abnormal behavior A with the highest behavior score in each frame of image; finding the face with intersection with the position B in the first step, calculating the distance between each face with intersection and the position B, and obtaining the identity corresponding to the face with the minimum distance as the student ST j Is the identity of (a).
Step two specifically comprises:
acquiring 2N frames from the students' classroom video every T seconds;
feeding the 2N frames, of shape 2N × 3 × H_0 × W_0, into a ResNet-50 backbone network to obtain 2N features, the feature corresponding to the i-th frame being denoted F_i, of size C_1 × H_1 × W_1, i = 1, 2, ..., 2N, where H_0 and W_0 are the height and width of each frame and H_1, W_1, C_1 are the height, width, and number of channels of F_i;
for each frame, performing feature aggregation with the other 2N-1 frames by cross-attention to obtain 2N-1 enhanced features, and adding these pixel-wise to obtain the enhanced feature of that frame, denoted F̂_i;
from each target frame's enhanced feature F̂_i, acquiring the behavior position, initial behavior category, and behavior score of each student using the decoupled classification and regression detection heads of YOLOX.
A student classroom abnormal behavior identification system based on video object detection comprises:
a face recognition module: acquiring the students' classroom video in real time and obtaining each student's face position in the video with a face detection algorithm; extracting the features of each student's face with a face feature extraction algorithm and comparing them against the student face feature library to obtain the identity corresponding to each face;
a behavior recognition module: obtaining the behavior position, initial behavior category, and behavior score of each student in each frame of the classroom video;
an abnormal behavior recognition module: for each student ST_j in each frame, enlarging ST_j's current behavior position S times as a candidate region and performing intersection-over-union matching against the other frames within this region to obtain ST_j's matched behavior positions, initial behavior categories, and behavior scores in those frames; counting the frequency of each abnormal behavior over all frames, finding the most frequent abnormal behavior A, and judging whether A occurs more than M times across all frames; if so, student ST_j is deemed to exhibit abnormal behavior A and the behavior position of A is obtained; if not, ST_j is deemed not to exhibit A; 1 ≤ j ≤ p, where p is the total number of students in the class;
an identity determination module: for a student ST_j exhibiting abnormal behavior A, finding the behavior position B corresponding to the instance of A with the highest behavior score among all frames; finding the faces from the face recognition module that intersect position B, computing the distance between each such face and B, and taking the identity of the face at the smallest distance as ST_j's identity.
Compared with the prior art, the invention has the following beneficial technical effects:
1. The classroom behavior algorithm model based on video target detection acquires each student's behavior category and behavior trajectory with a single model, reducing the amount of computation.
2. The backbone network of the video target detection model is shared across multiple frames, further reducing computation.
3. The feature enhancement strategy improves the accuracy of behavior localization and behavior recognition.
4. The method for acquiring target students' behavior and trajectory information improves behavior recognition accuracy and perceives the trajectory along which each behavior occurs.
5. The method for acquiring the identity of a student with abnormal behavior improves the accuracy of identifying that student.
Drawings
FIG. 1 is a flow chart of an abnormal behavior recognition method according to the present invention.
Detailed Description
A preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings.
As shown in fig. 1, the method for identifying abnormal behaviors of students in class based on video object detection in the invention comprises the following steps.
S1, student identity recognition:
Once class begins, video data of the students in class is acquired in real time, and the face position of each student is obtained with a face detection algorithm. For each student's face, feature information is obtained with a face feature extraction algorithm and compared against the student face feature library to obtain the identity corresponding to each face. Through this step, the position and identity of each face are acquired.
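The patent does not name the face detection or feature extraction algorithms it uses. Purely as an illustration of the library-comparison part of step S1, the following sketch assumes face embeddings have already been extracted by some model; the function name, the 128-dimensional embeddings, and the 0.5 cosine-similarity threshold are all illustrative assumptions, not part of the patent.

```python
import numpy as np

def match_identity(face_feat, feature_library, threshold=0.5):
    """Compare one face feature against the student face feature library
    using cosine similarity; return the best-matching student id, or None
    if no enrolled face is similar enough.

    face_feat: (D,) embedding of a detected face.
    feature_library: dict mapping student id -> (D,) enrolled embedding.
    """
    best_id, best_sim = None, -1.0
    f = face_feat / np.linalg.norm(face_feat)
    for sid, ref in feature_library.items():
        r = ref / np.linalg.norm(ref)
        sim = float(f @ r)                 # cosine similarity of unit vectors
        if sim > best_sim:
            best_id, best_sim = sid, sim
    return best_id if best_sim >= threshold else None

# toy library with three enrolled students (random stand-in embeddings)
rng = np.random.default_rng(0)
lib = {sid: rng.normal(size=128) for sid in ("s001", "s002", "s003")}
probe = lib["s002"] + 0.05 * rng.normal(size=128)   # noisy copy of s002
print(match_identity(probe, lib))  # → s002
```

In a real deployment the library would hold one averaged embedding per enrolled student, and the threshold would be tuned on held-out data.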
S2, identification of students' abnormal behaviors:
In practice, motion blur, occlusion, or camera defocus makes it difficult to determine a target's behavior position and category from a single frame. For a continuous video, however, target frames can be feature-enhanced with contextual semantic information, so that every target's behavior position and category in every target frame can be identified accurately.
The abnormal behavior recognition model of the invention simultaneously monitors abnormal behaviors of every student in class, such as playing with a mobile phone or sleeping, and identifies the students involved. The model comprises a backbone network module, a feature enhancement module, a classification detection head, and a regression detection head. The specific steps are as follows:
1. The behavior position, initial behavior category, and behavior score of each student in each frame are acquired through the abnormal behavior recognition model.
1) 2N frames are acquired from the real-time video data every T seconds.
2) The 2N frames, of shape 2N × 3 × H_0 × W_0, are fed into a ResNet-50 backbone network to obtain 2N features; the feature corresponding to the i-th target frame is denoted F_i, of size C_1 × H_1 × W_1, i = 1, 2, ..., 2N, where H_0 and W_0 are the height and width of each frame and H_1, W_1, C_1 are the height, width, and number of channels of F_i.
3) For each frame, feature aggregation with the other 2N-1 frames is performed by cross-attention, yielding 2N-1 enhanced features; these are added pixel-wise to obtain the enhanced feature of that frame, denoted F̂_i.
4) From each target frame's enhanced feature F̂_i, the behavior position, initial behavior category, and behavior score of each student are acquired with the decoupled classification and regression detection heads of YOLOX; for frame i these results form a set of per-student triples (position, initial category, score), j = 1, 2, ..., p, where p is the total number of students in the class.
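The aggregation of step 3) can be sketched in plain NumPy as follows, treating each spatial position of a C × H × W feature map as one token and using single-head dot-product attention. The single head, the 1/√C scaling, and the absence of learned query/key/value projections are simplifications for illustration; the patent does not spell out its exact attention formulation.

```python
import numpy as np

def cross_attention(q_feat, kv_feat):
    """Single-head dot-product cross-attention between two frame features.

    q_feat, kv_feat: (C, H, W) feature maps; each of the H*W spatial
    positions is one token of dimension C.  Returns an enhanced
    (C, H, W) feature for the query frame.
    """
    C, H, W = q_feat.shape
    q = q_feat.reshape(C, -1).T            # (HW, C) query tokens
    kv = kv_feat.reshape(C, -1).T          # (HW, C) key/value tokens
    scores = q @ kv.T / np.sqrt(C)         # (HW, HW) attention logits
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over key tokens
    out = attn @ kv                        # (HW, C) aggregated values
    return out.T.reshape(C, H, W)

def enhance(features, i):
    """Enhanced feature of frame i: pixel-wise sum of the 2N-1 features
    obtained by cross-attending frame i to every other frame."""
    return sum(cross_attention(features[i], features[k])
               for k in range(len(features)) if k != i)

# 2N = 6 toy frame features of size C1=8, H1=4, W1=4
feats = [np.random.default_rng(k).normal(size=(8, 4, 4)) for k in range(6)]
f_hat = enhance(feats, 0)
print(f_hat.shape)  # (8, 4, 4)
```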
2. Acquiring target students' behaviors and behavior trajectories
In a classroom scene, each student's range of motion is limited. Therefore, for each target student in each target frame, the current behavior position is enlarged S times and used as a candidate region, and intersection-over-union (IoU) matching is performed between this region and the other frames to obtain the student's matched positions, initial behavior categories, and behavior scores in the other 2N-1 frames. The frequency of each behavior over the 2N frames is counted and the most frequent behavior A is found. If A occurs more than M times within the 2N frames, the target student is deemed to have exhibited behavior A during the past T seconds, and the trajectory along which A occurred can be obtained at the same time; the student's behavior trajectory is thus perceived while behavior recognition accuracy is improved. If not, the target student is deemed to have exhibited no abnormal behavior in class during the past T seconds.
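The candidate-region matching and frequency counting described above can be sketched as follows. The values of S, M, and the IoU threshold are illustrative, since the patent leaves them unspecified, and the per-frame detection format is an assumed stand-in for the model's outputs.

```python
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def expand(box, s):
    """Scale a box by factor s about its centre (the candidate region)."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    hw, hh = (box[2] - box[0]) * s / 2, (box[3] - box[1]) * s / 2
    return (cx - hw, cy - hh, cx + hw, cy + hh)

def judge_abnormal(cur_box, other_frames, s=1.5, m=3, iou_thr=0.3):
    """Match the current student across the other frames inside the
    expanded candidate region, count each behavior's frequency, and
    report the most frequent behavior if it occurs more than m times.

    other_frames: per-frame detection lists of (box, label, score).
    Returns (label, count) when abnormal, else None.
    """
    region = expand(cur_box, s)
    counts = {}
    for dets in other_frames:
        # best IoU match for this student within the candidate region
        best = max(dets, key=lambda d: iou(region, d[0]), default=None)
        if best and iou(region, best[0]) >= iou_thr:
            counts[best[1]] = counts.get(best[1], 0) + 1
    if not counts:
        return None
    label = max(counts, key=counts.get)
    return (label, counts[label]) if counts[label] > m else None

# toy example: the same "phone" detection appears in all 5 other frames
frames = [[((10, 10, 20, 20), "phone", 0.9)] for _ in range(5)]
print(judge_abnormal((11, 11, 19, 19), frames))  # → ('phone', 5)
```

Collecting the matched boxes per frame (rather than only their labels) would additionally yield the trajectory along which behavior A occurred.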
S3, acquiring the identity of a student with abnormal behavior:
For a target student exhibiting abnormal behavior, the behavior position B corresponding to the abnormal behavior with the highest behavior score among all frames is found; the faces from step S1 that intersect B are found, the distance between each such face and position B is computed, and the identity of the face at the smallest distance is taken as the target student's identity. The distance here generally refers to the distance between centers.
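Step S3 reduces to finding, among the faces that intersect behavior position B, the one whose centre is nearest to B's centre. A minimal sketch, with illustrative box coordinates and an assumed identity-to-box mapping:

```python
def intersects(a, b):
    """True if boxes (x1, y1, x2, y2) overlap."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def center_distance(a, b):
    """Euclidean distance between box centres."""
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def assign_identity(behavior_box, faces):
    """Among faces (dict identity -> face box) intersecting the behavior
    position B, return the identity whose face centre is nearest to B;
    None if no face intersects B."""
    hits = {sid: box for sid, box in faces.items()
            if intersects(box, behavior_box)}
    if not hits:
        return None
    return min(hits, key=lambda sid: center_distance(hits[sid], behavior_box))

faces = {"s001": (0, 0, 10, 10), "s002": (8, 0, 18, 10), "s003": (40, 40, 50, 50)}
print(assign_identity((6, 2, 14, 12), faces))  # → s002
```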
The invention provides an end-to-end classroom abnormal behavior identification method based on video target detection, which acquires target students' behavior and trajectory information, determines the identities of students with abnormal behavior, and improves behavior localization and recognition accuracy through a feature enhancement strategy.
The system and the method correspond to each other, and the preferred scheme of the method is also applicable to the system.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted merely for clarity, and the embodiments may be combined appropriately to form other implementations that will be understood by those skilled in the art.

Claims (3)

1. A student classroom abnormal behavior identification method based on video target detection, comprising the following steps:
step one: acquiring the students' classroom video in real time and obtaining each student's face position in the video with a face detection algorithm; extracting the features of each student's face with a face feature extraction algorithm and comparing them against the student face feature library to obtain the identity corresponding to each face;
step two: obtaining the behavior position, initial behavior category, and behavior score of each student in each frame of the classroom video;
step three: for each student ST_j in each frame, enlarging ST_j's current behavior position S times as a candidate region and performing intersection-over-union matching against the other frames within this region to obtain ST_j's matched behavior positions, initial behavior categories, and behavior scores in those frames; counting the frequency of each abnormal behavior over all frames, finding the most frequent abnormal behavior A, and judging whether A occurs more than M times across all frames; if so, student ST_j is deemed to exhibit abnormal behavior A and the behavior position of A is obtained; if not, ST_j is deemed not to exhibit A; 1 ≤ j ≤ p, where p is the total number of students in the class;
step four: for a student ST_j exhibiting abnormal behavior A, finding the behavior position B corresponding to the instance of A with the highest behavior score among all frames; finding the faces from step one that intersect position B, computing the distance between each such face and B, and taking the identity of the face at the smallest distance as ST_j's identity.
2. The method for identifying students' abnormal classroom behaviors based on video object detection according to claim 1, wherein step two specifically comprises:
acquiring 2N frames from the students' classroom video every T seconds;
feeding the 2N frames, of shape 2N × 3 × H_0 × W_0, into a ResNet-50 backbone network to obtain 2N features, the feature corresponding to the i-th frame being denoted F_i, of size C_1 × H_1 × W_1, i = 1, 2, ..., 2N, where H_0 and W_0 are the height and width of each frame and H_1, W_1, C_1 are the height, width, and number of channels of F_i;
for each frame, performing feature aggregation with the other 2N-1 frames by cross-attention to obtain 2N-1 enhanced features, and adding these pixel-wise to obtain the enhanced feature of that frame, denoted F̂_i;
from each target frame's enhanced feature F̂_i, acquiring the behavior position, initial behavior category, and behavior score of each student using the decoupled classification and regression detection heads of YOLOX.
3. A student classroom abnormal behavior identification system based on video object detection, comprising:
a face recognition module: acquiring the students' classroom video in real time and obtaining each student's face position in the video with a face detection algorithm; extracting the features of each student's face with a face feature extraction algorithm and comparing them against the student face feature library to obtain the identity corresponding to each face;
a behavior recognition module: obtaining the behavior position, initial behavior category, and behavior score of each student in each frame of the classroom video;
an abnormal behavior recognition module: for each student ST_j in each frame, enlarging ST_j's current behavior position S times as a candidate region and performing intersection-over-union matching against the other frames within this region to obtain ST_j's matched behavior positions, initial behavior categories, and behavior scores in those frames; counting the frequency of each abnormal behavior over all frames, finding the most frequent abnormal behavior A, and judging whether A occurs more than M times across all frames; if so, student ST_j is deemed to exhibit abnormal behavior A and the behavior position of A is obtained; if not, ST_j is deemed not to exhibit A; 1 ≤ j ≤ p, where p is the total number of students in the class;
an identity determination module: for a student ST_j exhibiting abnormal behavior A, finding the behavior position B corresponding to the instance of A with the highest behavior score among all frames; finding the faces from the face recognition module that intersect position B, computing the distance between each such face and B, and taking the identity of the face at the smallest distance as ST_j's identity.
CN202310143918.0A 2023-02-14 2023-02-14 Student classroom abnormal behavior identification method and system based on video target detection Pending CN116311554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310143918.0A CN116311554A (en) 2023-02-14 2023-02-14 Student classroom abnormal behavior identification method and system based on video target detection


Publications (1)

Publication Number Publication Date
CN116311554A true CN116311554A (en) 2023-06-23

Family

ID=86835244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310143918.0A Pending CN116311554A (en) 2023-02-14 2023-02-14 Student classroom abnormal behavior identification method and system based on video target detection

Country Status (1)

Country Link
CN (1) CN116311554A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152688A (en) * 2023-10-31 2023-12-01 江西拓世智能科技股份有限公司 Intelligent classroom behavior analysis method and system based on artificial intelligence


Similar Documents

Publication Publication Date Title
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
CN112906631B (en) Dangerous driving behavior detection method and detection system based on video
CN113112416B (en) Semantic-guided face image restoration method
CN113762107B (en) Object state evaluation method, device, electronic equipment and readable storage medium
CN110458115B (en) Multi-frame integrated target detection algorithm based on time sequence
CN116311554A (en) Student classroom abnormal behavior identification method and system based on video target detection
Zhang et al. Detecting and removing visual distractors for video aesthetic enhancement
CN113705510A (en) Target identification tracking method, device, equipment and storage medium
CN110866473B (en) Target object tracking detection method and device, storage medium and electronic device
CN112801536A (en) Image processing method and device and electronic equipment
Guo et al. Open-eye: An open platform to study human performance on identifying ai-synthesized faces
CN111353439A (en) Method, device, system and equipment for analyzing teaching behaviors
Vázquez et al. Virtual worlds and active learning for human detection
Wang et al. Yolov5 enhanced learning behavior recognition and analysis in smart classroom with multiple students
Chen et al. Sound to visual: Hierarchical cross-modal talking face video generation
CN117459661A (en) Video processing method, device, equipment and machine-readable storage medium
US20230290118A1 (en) Automatic classification method and system of teaching videos based on different presentation forms
CN114882570A (en) Remote examination abnormal state pre-judging method, system, equipment and storage medium
CN113688739A (en) Classroom learning efficiency prediction method and system based on emotion recognition and visual analysis
CN111325185B (en) Face fraud prevention method and system
CN114495232A (en) Classroom student action recognition method and system based on deep learning
CN114550032A (en) Video smoke detection method of end-to-end three-dimensional convolution target detection network
CN112528790A (en) Teaching management method and device based on behavior recognition and server
CN115967837A (en) Method, device, equipment and medium for content interaction based on web course video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination