CN112257591A - Remote video teaching quality evaluation method and system based on machine vision - Google Patents


Info

Publication number
CN112257591A
CN112257591A
Authority
CN
China
Prior art keywords
learning
student
image
face
classroom
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011138213.2A
Other languages
Chinese (zh)
Inventor
赵峰
李唯哲
王志会
卞涧泉
马冰岛
陈琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Tiansheng Intelligent Technology Co ltd
Original Assignee
Anhui Tiansheng Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Tiansheng Intelligent Technology Co ltd filed Critical Anhui Tiansheng Intelligent Technology Co ltd
Priority to CN202011138213.2A priority Critical patent/CN112257591A/en
Publication of CN112257591A publication Critical patent/CN112257591A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/109Time management, e.g. calendars, reminders, meetings or time accounting
    • G06Q10/1091Recording time for administrative or management purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Human Resources & Organizations (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Educational Administration (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Evolutionary Biology (AREA)
  • Educational Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Economics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Human Computer Interaction (AREA)
  • General Business, Economics & Management (AREA)
  • Primary Health Care (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a remote video teaching quality evaluation method and system based on machine vision. The method comprises: acquiring a face image, a learning-state image and a classroom scene image for each student; matching the periodically acquired face images, after 3D conversion, against a preset face sample library to obtain student identity information; transmitting the periodically acquired learning-state images to a pre-trained deep neural network model to obtain each student's learning behavior; processing the classroom scene images in real time to obtain information on abnormal gatherings of people; and, after learning ends, statistically analyzing the recorded identity information and learning behaviors to obtain, according to the attendance and learning-quality evaluation rules, each student's attendance score, learning-quality score and comments. The method thereby realizes automatic attendance checking, automatic learning-quality evaluation and teaching-order maintenance for students in remote teaching.

Description

Remote video teaching quality evaluation method and system based on machine vision
Technical Field
The invention relates to the technical field of remote teaching monitoring, in particular to a remote video teaching quality evaluation method and system based on machine vision.
Background
Advances in science and technology have greatly changed the traditional teaching mode. Classroom lecturing by a teacher has gradually given way to video teaching and remote network teaching: large numbers of students attend class by watching video courseware or a remote network teacher's lessons on a large-screen television in the classroom, so no teacher needs to lecture on site. In the traditional teaching mode, the teacher who lectures in the classroom also maintains classroom discipline and safeguards the students' learning quality; with video teaching and remote network teaching, students obtain knowledge simply by watching the video courseware.
In the prior art, to safeguard classroom discipline and student learning quality during video and remote network teaching, an access control system is commonly installed at the classroom entrance to restrict who may enter and leave. A typical access control system uses face recognition: an authorized student is verified and admitted to the classroom for learning, while everyone else is kept out. This effectively blocks unrelated people, including paid stand-ins ("substitute learners"), from entering the classroom, and thus partly safeguards classroom discipline. However, this approach cannot address teaching-order problems caused by the authorized students themselves, cannot maintain teaching discipline in a timely manner, and cannot guarantee the students' learning quality.
In addition, to safeguard classroom discipline and learning quality during video and remote network teaching, people have tried installing monitoring cameras in the classroom, transmitting the video back to a school monitoring room in real time, playing it on designated displays, and having a responsible teacher or manager observe classroom discipline and student learning states. However, the video streams from many cameras in many classrooms must be concentrated on the monitoring room's display wall or on a monitoring computer, so managers struggle to notice emergencies occurring in the classrooms in time. The captured pictures also exhibit perspective: students far from the camera appear small in the frame, making it hard to tell whether they are attending class normally. Although real-time collection, display, recording and playback of classroom footage can deter students who do not study well, discipline enforcement remains untimely, learning-quality assurance remains incomplete, and standardized, intelligent evaluation of every student's learning quality is even harder. In short, no relatively effective approach has been provided for safeguarding classroom discipline and learning quality while students study.
Disclosure of Invention
Based on the technical problems in the background art, the invention provides a remote video teaching quality assessment method and system based on machine vision that safeguard classroom discipline and learning quality while students study.
The invention provides a remote video teaching quality evaluation method based on machine vision, which comprises the following steps:
periodically acquiring face images and learning-state images of the students through dome camera scanning, and acquiring classroom scene images in the classroom in real time through a bullet camera;
matching the periodically acquired face images with a preset face sample library after 3D conversion to obtain student identity information, and recording and storing the student identity information;
transmitting the periodically acquired student learning-state images to a pre-trained deep neural network model, and obtaining, recording and storing each student's learning behavior;
performing real-time image processing on a classroom scene image to obtain abnormal personnel gathering information;
after learning ends, statistically analyzing the recorded student identity information and obtaining, according to the attendance rules, each student's attendance score and comment, realizing automatic attendance checking of students in remote teaching;
after learning ends, statistically analyzing the recorded student learning behaviors and obtaining, according to the learning-quality evaluation rule, each student's learning-quality score and comment, realizing automatic evaluation of remote-teaching students' learning quality;
during teaching, monitoring the abnormal gathering information; if people gather abnormally, outputting warning information to the supervision workstation and notifying the operator on duty to come and handle it, realizing teaching-order maintenance for remote teaching.
Further, matching the periodically collected face images, after 3D conversion, against the preset face sample library to obtain, record and store student identity information includes:
periodically acquiring images of the students in the classroom by dome camera scanning, and extracting the face images therein;
extracting face parameters from each face image and fitting them to a preset 3D face parameter model to obtain a reshaped current 3D face model;
extracting a frontal face image from the current 3D face model and matching it against the preset face sample library to obtain the student's identity information.
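The matching step can be sketched as follows. This is an illustrative sketch, not the patent's implementation: a frontal-face feature vector is compared against the preset sample library by cosine similarity, and the function names, feature values and acceptance threshold are all assumptions for illustration.

```python
# Hypothetical sketch: match a frontal-face feature vector against a
# preset sample library by cosine similarity. Threshold is an assumption.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_identity(frontal_features, sample_library, threshold=0.8):
    """Return the best-matching student ID, or None if no sample passes."""
    best_id, best_score = None, threshold
    for student_id, sample_features in sample_library.items():
        score = cosine_similarity(frontal_features, sample_features)
        if score > best_score:
            best_id, best_score = student_id, score
    return best_id

library = {"S001": [1.0, 0.0, 0.5], "S002": [0.0, 1.0, 0.5]}
print(match_identity([0.9, 0.1, 0.45], library))  # prints S001
```

In practice the features would come from a face recognition model rather than hand-written vectors; the dictionary stands in for the face recognition server database mentioned later in the description.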
Further, before extracting the face parameters and fitting them to the preset 3D face parameter model to obtain the reshaped current 3D face model, the method includes:
acquiring a background image of each preset position of the dome camera when no student exists;
based on a difference method, differencing the face image against the background image to obtain a differenced image;
judging whether the differenced image exceeds a preset threshold;
if not, the current preset position is empty and no student is present;
if so, sequentially binarizing and smoothing the differenced image to obtain its face region, and extracting the parameters of that region as the face parameters.
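The difference-and-threshold test can be illustrated with a minimal sketch, assuming 8-bit grayscale frames stored as 2-D lists. The pixel and area thresholds below are made-up values, and the smoothing-filter stage is omitted:

```python
# Minimal sketch of the empty-seat check: difference the current frame
# against the stored background and binarize; if too few pixels changed,
# the preset position is treated as empty. Thresholds are assumptions.

def difference_image(frame, background):
    return [[abs(f - b) for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def binarize(diff, pixel_threshold=30):
    return [[1 if px > pixel_threshold else 0 for px in row] for row in diff]

def seat_occupied(frame, background, pixel_threshold=30, area_threshold=4):
    """True if enough pixels differ from the background (student present)."""
    mask = binarize(difference_image(frame, background), pixel_threshold)
    changed = sum(sum(row) for row in mask)
    return changed >= area_threshold

background = [[10, 10, 10]] * 3
empty      = [[12, 9, 11]] * 3             # only sensor noise
occupied   = [[200, 180, 10]] * 3          # a bright region entered the view
print(seat_occupied(empty, background))    # prints False
print(seat_occupied(occupied, background)) # prints True
```

A production system would apply a real smoothing filter (e.g. median or Gaussian) between binarization and region extraction, as the text describes.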
Further, before acquiring the face image, the method comprises the following steps:
establishing a student sample library and a blacklist personnel sample library;
acquiring pre-encoded image parameters of the dome camera installed in the classroom, the parameters comprising the classroom number, the dome camera number and the preset-position number, the preset position being a position the dome camera monitors;
and acquiring the video stream uploaded by the dome camera, and decoding the video stream to obtain a face image.
Further, transmitting the periodically collected learning-state images to the pre-trained deep neural network model to obtain, record and store each student's learning behavior includes:
periodically acquiring images of the students in the classroom by dome camera scanning, and extracting the learning-state image of the student in the middle area of each image;
whitening the learning-state image and inputting the whitened image into the pre-trained deep neural network model, where the network comprises a DBN (Deep Belief Network) and a softmax classifier;
the DBN processes the learning-state image to obtain feature parameters of the student's learning behavior;
classifying the feature parameters with the softmax classifier to obtain each student's learning behavior, which is one of: front-looking, head-down reading, head-down writing, empty seat, making a call, playing a mobile phone, turning back, left-side looking, right-side looking and sleeping.
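The final classification stage can be sketched as a softmax over scores for the ten behavior classes listed above. The DBN feature extractor itself is not shown, and the score values below are invented for illustration:

```python
# Sketch of the softmax classification stage over the ten behavior classes.
# The input scores stand in for the DBN's output and are assumptions.
import math

BEHAVIORS = ["front-looking", "head-down reading", "head-down writing",
             "empty seat", "making a call", "playing a mobile phone",
             "turning back", "left-side looking", "right-side looking",
             "sleeping"]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def classify(feature_scores):
    """Return the most probable behavior label and its probability."""
    probs = softmax(feature_scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return BEHAVIORS[best], probs[best]

scores = [2.1, 0.3, 0.1, -1.0, 0.0, 0.2, -0.5, 0.4, 0.3, -2.0]
label, p = classify(scores)
print(label)  # prints front-looking (index 0 has the highest score)
```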
Further, before acquiring the learning state image, the method includes:
establishing a learning behavior classification of the student, comprising: front-looking, head-down reading, head-down writing, empty seat, making a call, playing a mobile phone, turning back, left-side looking, right-side looking and sleeping;
acquiring pre-encoded image parameters of the dome camera installed in the classroom, the parameters comprising the classroom number, the dome camera number and the preset-position number, the preset position being a position the dome camera monitors;
acquiring a video stream uploaded by a dome camera, and decoding the video stream to obtain a decoded image;
and cutting the learning image of the student in the decoded image to obtain the learning state image of the student.
Further, the training process of the deep neural network model comprises:
establishing a student behavior classification training picture sample set and a verification sample set;
whitening the pictures in the picture sample set to obtain a whitened picture sample set, and dividing the whitened picture sample set into a training sample set and a verification sample set;
inputting the training sample set into the constructed deep neural network model, and acquiring weights and parameters corresponding to the DBN and softmax classifiers respectively to obtain the trained deep neural network model;
inputting the verification sample set into the trained deep neural network model to obtain the student learning behaviors corresponding to the verification sample set, and evaluating the accuracy of judging the student learning behaviors;
if the accuracy of the student learning behavior judgment is within a preset range, completing deep neural network model training;
and if the accuracy of the student's learning behavior judgment is not within the preset range, continuing to perform optimization training on the trained deep neural network model through the training sample set until the accuracy of the student's learning behavior judgment is within the preset range, and finishing the deep neural network model training.
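The validate-or-retrain loop described above can be sketched schematically. The "model quality" stand-in below replaces the actual DBN training and validation passes and is purely an assumption for illustration:

```python
# Schematic sketch of the validate-or-retrain loop: keep running
# optimization rounds until validation accuracy reaches the preset range.
# The quality stand-in replaces real DBN + softmax training (an assumption).

def validation_accuracy(model_quality):
    # Stand-in for running the verification sample set through the model.
    return min(1.0, model_quality)

def train_until_acceptable(initial_quality, min_accuracy=0.9,
                           improvement_per_round=0.05, max_rounds=100):
    """Repeat optimization rounds until validation accuracy is in range."""
    quality, rounds = initial_quality, 0
    while validation_accuracy(quality) < min_accuracy and rounds < max_rounds:
        quality += improvement_per_round  # one more optimization pass
        rounds += 1
    return validation_accuracy(quality), rounds

acc, rounds = train_until_acceptable(0.7)
print(acc, rounds)  # accuracy has reached the preset range
```

The `max_rounds` guard is a practical addition not stated in the text; real training would stop on a convergence criterion rather than a fixed improvement per round.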
Further, acquiring the classroom scene image in real time through the bullet camera and processing it to obtain abnormal gathering information for people in classroom aisles and open areas specifically includes:
processing the classroom scene image based on a histogram enhanced defogging algorithm to obtain a processed classroom scene image;
extracting, based on a difference method, all person images in the processed classroom-aisle and open-area scenes;
processing the extracted person images with thresholding, morphological operations and image fusion to obtain hole-free person images;
extracting every individual from the hole-free person images according to head-and-shoulder contour features, and determining each person's position;
calculating the distance between adjacent persons according to the perspective relation of the image captured by the bullet camera;
and if the distance between adjacent people is smaller than a set value, determining that people are gathered.
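The adjacent-distance check can be sketched as follows, assuming head-shoulder detection has already produced (x, y) positions for each person. The set distance value is an assumption, and the perspective correction mentioned above is omitted:

```python
# Minimal sketch of the gathering check: flag any pair of detected persons
# standing closer than a set distance. The distance value is an assumption;
# perspective correction of pixel distances is omitted here.
import math

def detect_gathering(positions, min_distance=1.0):
    """Return pairs of person indices standing closer than min_distance."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if math.hypot(dx, dy) < min_distance:
                pairs.append((i, j))
    return pairs

people = [(0.0, 0.0), (0.5, 0.2), (5.0, 5.0)]
print(detect_gathering(people))  # prints [(0, 1)]
```

If any pair is returned, the system would output the warning to the supervision workstation as described.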
Further, before periodically acquiring the face image and learning-state image of each student through dome camera scanning and acquiring the classroom scene image in real time through the bullet camera, the method comprises:
determining the monitoring area of each dome camera;
setting the preset positions of each dome camera and its scanning path;
entering frontal face images of all students into the face recognition server database and extracting their face recognition features to serve as the preset face sample library.
A remote video teaching quality assessment system based on machine vision is characterized by comprising an acquisition module, an identity matching module, a learning behavior identification module, an abnormal aggregation detection module, an automatic attendance checking module, a learning quality assessment module and an abnormal information prompt module;
the acquisition module is used for periodically acquiring each student's face image and learning-state image through dome camera scanning, and for acquiring the classroom scene image in real time through the bullet camera;
the identity matching module is used for matching the periodically acquired face images with a preset face sample library after 3D conversion to obtain student identity information, and recording and storing the student identity information;
the learning behavior recognition module is used for transmitting the periodically acquired student learning state images to a pre-trained deep neural network model to obtain the learning behavior of each student, and recording and storing the learning behavior;
the abnormal gathering detection module is used for carrying out real-time image processing on the classroom scene image to obtain abnormal gathering information of personnel;
the automatic attendance module is used for counting and analyzing the recorded and stored student identity information after learning is finished, and obtaining attendance scores and comments of each student according to attendance rules to realize automatic attendance of students in remote teaching;
the learning quality evaluation module is used for counting and analyzing the recorded and stored learning behaviors of the students after learning is finished, obtaining the learning quality score and comment of each student according to the learning quality evaluation rule, and realizing the automatic evaluation of the learning quality of the remote teaching students;
the abnormal information prompting module is used for monitoring the abnormal gathering information during teaching; if people gather abnormally, it outputs alarm information to the supervision workstation and notifies personnel to come and handle it, realizing teaching-order maintenance for remote teaching.
Advantages of the remote video teaching quality evaluation method and system based on machine vision provided by the invention: through face recognition, each student's learning start time, learning end time, number of entries into and exits from the classroom during study, and time spent studying are automatically counted and analyzed, realizing automatic evaluation of the student's attendance; through the deep neural network model, feature extraction and classification are performed on the students' learning states to obtain their learning behaviors, distinguishing learning from non-learning states and realizing evaluation of learning quality; through moving-object detection, abnormal gatherings of people in the classroom are detected and an attendant is alerted and guided to come and handle them, safeguarding normal teaching order. The invention thus realizes visual monitoring and intelligent evaluation of student attendance checking, learning-quality evaluation and teaching-order maintenance during remote teaching, safeguarding classroom discipline and learning quality while students study.
Drawings
FIG. 1 is a flow chart of the steps of a method for remote video teaching quality assessment based on machine vision;
FIG. 2 is a diagram of a hardware configuration corresponding to the present invention;
FIG. 3 is a layout of a classroom equipped with 6 dome cameras;
FIG. 4 is a layout of a classroom equipped with 2 bullet cameras;
FIG. 5 is a schematic flow chart of the present invention.
Detailed Description
The present invention is described in detail below with reference to specific embodiments, and in the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein, but rather should be construed as broadly as the present invention is capable of modification in various respects, all without departing from the spirit and scope of the present invention.
In the present application, the devices in the monitoring range comprise: N network high-definition monitoring dome cameras, two high-definition monitoring bullet cameras, a Network Video Recorder (NVR), a network switch, a video monitoring workstation, a student learning management workstation, a portrait recognition server, a behavior analysis server and an abnormality detection server in the classroom; as shown in fig. 2, they are connected to each other through the network switch.
The bullet cameras collect the learning scene in the classroom in real time; the two bullet cameras are arranged along a diagonal so that their monitoring range covers the entire learning classroom. The Network Video Recorder (NVR) stores the monitoring video streams collected by the N dome cameras and the two bullet cameras; the required storage capacity (hard-disk size) is determined by how long the video must be retained. The video monitoring workstation controls the monitoring angle and lens of each dome camera and displays either a preview of all dome and bullet cameras or the high-definition monitoring picture of one or more dome/bullet cameras.
The portrait recognition server dynamically recognizes the identity of persons in the video stream; each recognized student image is stored in the portrait recognition server, associated with its classroom number, seat number and snapshot time, while recognized "blacklisted" persons are reported to the student learning management workstation for processing.
The behavior analysis server is used for analyzing the learning state of the student in the video streaming picture, judging whether the learning state is a normal learning state or an abnormal learning state, and storing the identified learning state image in the behavior analysis server in association with the classroom number, the seat number and the snapshot time.
The abnormality detection server analyzes the number of students in the video stream and the distances between them to identify whether people are gathering in the monitored video; if so, it sends a learning-abnormality warning to the student learning management workstation, and stores the snapshot image, snapshot time and abnormality type in the abnormality detection server.
The student learning management workstation counts the students' portrait recognition data and learning behavior data and evaluates each student's attendance and learning quality according to predetermined evaluation rules; upon receiving blacklisted-person information from the portrait recognition server or learning-abnormality information from the abnormality detection server, it immediately issues a video/audio warning to remind a manager to attend to it.
During the whole teaching activity, the portrait recognition server and the behavior analysis server work periodically and cyclically, performing portrait recognition and behavior analysis on the students at each dome camera preset position and recording the results in the corresponding databases; the abnormality detection server works cyclically, alternately reading the video streams collected by the 2 bullet cameras to judge whether any abnormal condition exists in the classroom.
When the teaching video finishes playing, the teaching activity ends: the dome cameras stop scanning, and the portrait recognition server, behavior analysis server and abnormality detection server stop working.
After the teaching activity is finished, the student learning management workstation reads the recorded data from the portrait recognition server database, scores the attendance condition of each student according to the attendance rules and gives comments so as to realize automatic attendance of the students in remote teaching; meanwhile, the student learning management workstation reads recorded data from the behavior analysis server database, scores the learning quality of each student according to the learning quality evaluation rule and gives comments so as to realize the learning quality evaluation of the remote teaching students.
The video teaching and remote network teaching are implemented in a television teaching mode: the teaching content is transmitted to large-screen televisions in the classroom for playback, and the students complete their learning tasks by watching, as shown in fig. 3. The classroom is equipped with 5 large-screen liquid-crystal televisions, and each student completes the teaching task by watching a nearby television.
The high-definition monitoring dome cameras are mounted on the classroom ceiling facing the students; each video acquisition area is set so that student portraits can be accurately identified and learning behaviors recognized, and the acquisition areas of all dome cameras together cover every student in class. In fig. 3, 6 high-definition monitoring dome cameras are laid out in the classroom, each with its own monitoring area (dome camera 1 corresponds to area I, dome camera 2 to area II, and so on); the superimposed monitoring areas of the 6 dome cameras cover all students in the classroom.
After the high-definition monitoring dome cameras are installed, their preset positions are tested and recorded before use. The number of preset positions of each dome camera equals the number of learning seats/students in its monitoring area; each preset position comprises the horizontal rotation angle α, the vertical pitch angle β and the lens focal length f of the dome camera, and is denoted (α_ij, β_ij, f_ij), where i = 1, 2, …, N, j = 1, 2, …, M_i; N is the number of monitoring dome cameras installed in the classroom, and M_i is the total number of seats or students in the monitoring area of the i-th dome camera.
When a preset position is recorded, the horizontal rotation angle α and the vertical pitch angle β of the dome camera are adjusted so that the monitored learning seat/student is centered in the captured picture, and the lens focal length f is adjusted so that the learning seat/student appears large enough in the picture to facilitate student identity recognition and learning behavior analysis; in this way the preset position (α, β, f) of the monitored learning seat/student is obtained. By collecting the preset positions one by one, all preset positions (α_ij, β_ij, f_ij), i = 1, 2, …, N, j = 1, 2, …, M_i, of the N dome cameras are obtained.
After the collection of preset positions is completed, each learning seat/student in the classroom corresponds to one preset position of one of the monitoring dome cameras; that is, there is a one-to-one correspondence between the preset positions of the monitoring dome cameras and the learning seats/students in the classroom.
During operation, each dome camera scans every learning seat/student in its monitoring area on a fixed period T, and the captured scene video is used by the portrait recognition server for student portrait recognition and by the behavior analysis server for student behavior analysis. The scanning time for each learning seat/student is T/M_i; the identity information and learning behavior obtained during this T/M_i scanning window are taken as the student's identity and behavior for the whole period T. In practice, if, for example, T = 60 s and a dome camera monitors 12 learning seats/students in total, the video scanning/sampling time for each learning seat/student is 5 s, and the identity and learning behavior identified from that 5 s of video are taken as the student's identity and behavior for the 60 s sampling period.
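The round-robin scan schedule described above can be sketched as follows. This is a minimal illustration only: the preset-position table and the per-seat dwell computation are hypothetical stand-ins, and no real PTZ camera control is performed.

```python
# Sketch of the per-period scan schedule: each dome camera i visits its
# M_i preset positions in turn, dwelling T / M_i seconds at each one.
# The preset values (alpha, beta, f) below are invented for illustration.

T = 60.0  # scan period in seconds

# presets[i] lists the (alpha, beta, f) preset positions of dome camera i,
# one per monitored learning seat/student.
presets = {
    1: [(10.0, -30.0, 8.0), (25.0, -32.0, 8.5), (40.0, -31.0, 9.0)],
    2: [(-15.0, -28.0, 7.5), (0.0, -29.0, 8.0)],
}

def scan_schedule(presets, period):
    """Return (camera, seat_index, preset, dwell_seconds) tuples for one period."""
    schedule = []
    for cam, positions in presets.items():
        dwell = period / len(positions)  # T / M_i seconds per seat
        for j, pos in enumerate(positions, start=1):
            schedule.append((cam, j, pos, dwell))
    return schedule

for cam, seat, (alpha, beta, f), dwell in scan_schedule(presets, T):
    # A real system would command the dome camera to this preset and
    # sample video for `dwell` seconds before moving on.
    print(f"camera {cam} seat {seat}: pan={alpha} tilt={beta} zoom={f} dwell={dwell:.1f}s")
```

With 3 seats on camera 1 and 2 on camera 2, the dwell times come out to 20 s and 30 s respectively, matching the T/M_i rule.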
Specifically, as shown in fig. 1 to 4, the method for evaluating the quality of remote video teaching based on machine vision according to the present invention includes S0 to S7:
S0: when learning begins, a working instruction is sent to the dome cameras and bullet cameras to start scanning; a working instruction is also sent to the portrait recognition server, the behavior analysis server and the abnormality detection server, which start portrait recognition, learning behavior judgment and abnormal gathering detection, respectively.
S1: periodically acquiring images to be recognized, wherein the images to be recognized comprise face images, learning state images and classroom scene images. S1 includes S1-1 and S1-2:
S1-1: periodically acquiring a face image and a learning state image of each student through dome camera scanning;
The face image and the learning state image are acquired by the dome cameras arranged in the classroom. Each dome camera uploads the face image to the portrait recognition server for image recognition to complete automatic student attendance, and uploads the learning state image to the behavior analysis server for behavior analysis to complete the student learning quality evaluation.
S1-2: acquiring classroom scene images in the classroom in real time through the bullet cameras;
The classroom scene image is acquired by the bullet cameras arranged in the classroom. The bullet cameras upload the classroom scene image to the abnormal gathering analysis server to complete abnormal gathering judgment for the students.
S2: matching the periodically acquired face images with a preset face sample library after 3D conversion to obtain student identity information, and recording and storing the student identity information;
The portrait recognition server performs image processing and image comparison on the acquired face image to determine whether the current face image corresponds to a preset student. The possible recognition results are "empty", "student", "blacklist person" and "unidentified"; each result is recorded in the portrait recognition server database together with the picture used for recognition, the recognition time, the received classroom number, the dome camera number and the preset position number. If the recognition result is "blacklist person", warning information and the blacklist person's information are sent to the student learning management workstation.
S3: transmitting the periodically acquired student learning state images to a pre-trained deep neural network model to obtain, record and store the learning behavior of each student;
The behavior analysis server performs image processing and image classification on the acquired learning state image to obtain the current student's learning behavior.
S4: performing real-time image processing on a classroom scene image to obtain abnormal personnel gathering information;
The abnormality detection server performs image processing on the classroom scene image to judge whether abnormal personnel gathering exists.
S5: after learning is finished, the recorded student identity information is counted and analyzed, and each student's attendance score and comments are obtained according to the attendance rules, realizing automatic attendance checking of students in remote teaching.
The portrait recognition server uploads the recorded and stored student identity information to a student learning management workstation, and the student learning management workstation counts and analyzes the student identity information at different times and outputs attendance scores and comments of students.
S6: after learning is finished, the recorded learning behaviors of the students are counted and analyzed, and each student's learning quality score and comments are obtained according to the learning quality evaluation rules, realizing automatic learning quality evaluation of remote teaching students.
The behavior analysis server uploads the recorded learning behaviors to the student learning management workstation, which counts and analyzes the learning behaviors at different times and outputs each student's learning quality score and comments.
S7: during teaching, abnormal personnel gathering is detected; if abnormal gathering occurs, warning information is output to the supervision workstation and the operator on duty is notified to handle it, realizing teaching order maintenance for remote teaching.
The abnormality detection server uploads the abnormal personnel gathering information to the student learning management workstation, which handles abnormal gatherings at different times in real time, reminds and guides the person on duty to deal with them through sound, light, message boxes and the like, and records the gathering situation, gathering time, number of gatherings and the like in the students' abnormal gathering evaluation table.
Through the circulation of steps S0 to S7, student identities in video teaching, television teaching and network distance teaching are dynamically identified in real time, which effectively addresses "proxy learning" and "surrogate attendance" in some training and learning processes, unifies the standard for automatic student attendance checking, and facilitates scientific, normative evaluation of students' learning quality and compliance with teaching discipline, free from human factors.
Specifically, as shown in fig. 1 and 5, automatic student attendance checking may be implemented by the following detailed steps S100 to S212:
S100: establishing a student sample library and a blacklist person sample library, and storing them in the portrait recognition server;
Specifically, before formal teaching begins, a sample library of students participating in the learning and a blacklist person sample library are established. The frontal face portrait of each participating student is collected, the portrait picture feature value of each student is extracted, and the feature values are recorded in the portrait recognition server's student sample library; each portrait corresponds to one student, and the student's basic information (such as name, gender, identification number, contact telephone, student number and the like) is also recorded in the portrait recognition server. Before each learning session, the portrait sample library and feature values of all participating students are refreshed: students not attending due to leave requests, course completion and the like are deleted, and the frontal portraits, feature values and basic information of newly added students are added. A blacklist of persons with illegal behaviors such as "proxy learning" and "surrogate attendance" is established; the frontal face portrait of each blacklisted person is collected, its feature value is extracted, and the feature value, name, gender, identification card number, contact telephone and the like are recorded in the portrait recognition server's blacklist sample library.
S120: the portrait recognition server obtains the pre-coded classroom number, dome camera number and preset position number, and simultaneously obtains the face image uploaded by the dome camera, wherein the dome camera is arranged in the classroom and the preset position is the position detected by the dome camera;
The portrait recognition server receives the classroom number, dome camera number and preset position number sent by the video monitoring workstation. It simultaneously acquires the video stream collected by the dome camera and decodes it into single-frame images for subsequent processing, each frame serving as an image to be recognized.
Specifically, a plurality of dome cameras are arranged in a classroom, each dome camera corresponds to a certain monitoring area, for example, the monitoring area corresponding to the dome camera 1 is I, the monitoring area corresponding to the dome camera 2 is II, and the like, and the monitoring areas of all the dome cameras can cover students in all the classrooms after being superposed.
After receiving the "teaching start" command, the video monitoring workstation controls each dome camera in the classroom to scan each student or seat on a fixed period according to the preset positions set in advance, scanning once per minute, and collects the video stream of the designated classroom area to obtain images to be recognized. The collected video stream is transmitted to a network video recorder for recording and storage on one hand, and to the portrait recognition server for processing on the other. After receiving the "teaching end" command, the video monitoring workstation stops the scanning of all monitoring dome cameras.
S130: the portrait recognition server obtains the background image of each preset position, uploaded by the dome camera when no student is present;
the background image is the image of each preset position/seat without the trainee.
S140: the portrait recognition server subtracts the background image from the face image using a difference method to obtain a difference image;
An existing difference algorithm is used to detect whether a student is in the seat; if so, the student's image can be acquired directly, reducing the influence of the background behind the student.
S150: the portrait recognition server judges whether the difference image exceeds a preset threshold; if not, proceed to step S160; if so, proceed to step S170;
the threshold value corresponds to whether a face image exists or not.
S160: the current preset position is empty and no student is present;
When no student is present, the current seat is an empty seat, and the portrait recognition server can directly feed back "empty" to the student learning management workstation.
S170: the portrait recognition server performs binarization and smoothing filtering on the difference image in sequence to obtain the face region of the portrait in the difference image, and proceeds to step S180;
Binarization is performed on the difference image to eliminate background residue; the image is then smoothed to remove noise, and the gradient changes in pixel counts in the horizontal and vertical directions are found, thereby determining the face region of the person in the difference image.
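A minimal sketch of this presence check is given below (frame differencing against the empty-seat background, thresholding/binarization, and a simple mean-filter standing in for the smoothing step). The array sizes and threshold values are illustrative assumptions, not values from the patent.

```python
import numpy as np

def detect_presence(frame, background, diff_thresh=30, area_thresh=0.02):
    """Difference the frame against the empty-seat background, binarize,
    smooth with a 3x3 mean filter, and decide whether a student is present.
    All thresholds here are illustrative, not the patent's values."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    binary = (diff > diff_thresh).astype(np.float32)       # binarization
    padded = np.pad(binary, 1, mode="edge")                # 3x3 mean smoothing
    smooth = sum(padded[i:i + binary.shape[0], j:j + binary.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    mask = smooth > 0.5
    occupied = mask.mean() > area_thresh   # enough changed area => occupied
    return occupied, mask

# Synthetic 64x64 test scene: empty background vs. a bright "student" blob.
bg = np.zeros((64, 64), dtype=np.uint8)
frame = bg.copy()
frame[20:44, 24:40] = 200
```

An unchanged frame produces an empty mask ("empty seat"), while the synthetic blob exceeds the area threshold and is reported as occupied.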
It should be understood that "binarization" and "smoothing" as used in this application may both employ existing methods to determine the region of the portrait in the image.
S180: the portrait recognition server locates the key feature points of the face according to the face parameters in the determined face region;
The human face is highly structured; contour points of the face image, such as the eyes, nose tip, mouth corners and eyebrows, can be extracted from the face region in the difference image to locate the key facial feature points of the current student. Extraction of face contour points can be performed with existing methods.
S190: the portrait recognition server maps the key feature points onto a preset 3D face parameter model and reconstructs the face image by scanning to obtain the current face 3D model corresponding to the face image;
The face is highly structured, and the key feature points of the face, such as the eyes, nose tip, mouth corners and eyebrows, are located from the face image contour points (face parameters) in the image to be recognized. The face image is scanned and reconstructed to obtain a face model, and then Principal Component Analysis (PCA) is used to fit a Gaussian distribution to the face parameters and construct the current face 3D model. The 3D face parameter model is a preset universal face model.
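The PCA-based reconstruction can be sketched as fitting a linear model s ≈ s̄ + Bα to the located contour points, a simplified picture of a morphable-model fit. The mean shape, basis and least-squares solver below are generic assumptions for illustration, not the patent's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "universal face model": a mean shape s_bar (3D landmark coordinates,
# flattened) and a PCA basis B of k deformation modes. In a real system both
# would come from offline training on face scans; here they are random.
n_points, k = 20, 4
s_bar = rng.normal(size=3 * n_points)
B = np.linalg.qr(rng.normal(size=(3 * n_points, k)))[0]  # orthonormal basis

def fit_face(observed, s_bar, B):
    """Least-squares fit of PCA coefficients alpha so that
    s_bar + B @ alpha best matches the observed landmark points."""
    alpha, *_ = np.linalg.lstsq(B, observed - s_bar, rcond=None)
    return s_bar + B @ alpha, alpha

# Synthesize "observed" landmarks from known coefficients and recover them.
alpha_true = np.array([1.5, -0.7, 0.3, 2.0])
observed = s_bar + B @ alpha_true
reconstructed, alpha_hat = fit_face(observed, s_bar, B)
```

Because the toy basis is orthonormal and the observations are noise-free, the fit recovers the generating coefficients exactly; real landmark data would give only a least-squares approximation.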
S200: extracting a face front image in the current face 3D model to obtain face front image characteristics;
Once the current face 3D model is obtained, a 2D portrait picture at any angle can be generated by rotating and projecting it. Although the angle between the dome camera and each student differs, and students' sitting postures differ, so the face angles in the captured pictures also differ, the student's frontal image can still be obtained from the current face 3D model, and the feature value of that frontal image is then extracted.
From the current face 3D model of the current student, a 2D image of the face can be obtained in any direction. To improve the accuracy of comparison with the preset faces in the student sample library and the blacklist person sample library, the frontal face image features of the current face 3D model are used, improving portrait recognition accuracy. This avoids the poor accuracy of directly comparing captured 2D images, which are not necessarily frontal images: a side image cannot fully express the contour of the face.
S210: matching the face front image with a preset face sample library to obtain student identity information;
The extracted feature value of the student's frontal face image is compared with the feature values of all students in the student sample library. If the root mean square error between the collected frontal image's feature values and those of some student in the sample library is minimal and within the allowed error threshold, the person is considered to be that student, namely the student learning in the seat at the preset position of the dome camera, and the recognition result is recorded in the portrait recognition server database. If the minimum root mean square error exceeds the allowed threshold, the person's image feature value is compared with the blacklist person sample library; if that comparison succeeds, the portrait recognition server feeds back the blacklist person's information to the student learning management workstation, otherwise it feeds back "unidentified".
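The matching rule above can be sketched as a nearest-neighbor match by RMSE with a threshold and a blacklist fallback. The feature vectors, names and threshold below are illustrative assumptions:

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def identify(features, students, blacklist, threshold=0.5):
    """students/blacklist map name -> feature vector. Returns
    ('student', name), ('blacklist', name) or ('unidentified', None),
    following the patent's match-then-blacklist-fallback order."""
    if students:
        name = min(students, key=lambda n: rmse(features, students[n]))
        if rmse(features, students[name]) <= threshold:
            return ("student", name)
    if blacklist:
        name = min(blacklist, key=lambda n: rmse(features, blacklist[n]))
        if rmse(features, blacklist[name]) <= threshold:
            return ("blacklist", name)
    return ("unidentified", None)

# Toy sample libraries (feature values invented for illustration).
students = {"zhang": [0.1, 0.9, 0.4], "li": [0.8, 0.2, 0.5]}
blacklist = {"proxy": [5.0, 5.0, 5.0]}
```

A query near a registered student matches that student; one near a blacklisted vector triggers the blacklist branch; anything far from both is "unidentified".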
S211: steps S120 to S210 are cycled periodically until the teaching activity ends, and the portrait recognition server stores each recognition result;
The results recognized by the portrait recognition server, namely "empty", "student", "blacklist person" and "unidentified", are recorded in the portrait recognition server database together with the image used for recognition, the recognition time, the received classroom number, the dome camera number and the preset position number. If the recognition result is "blacklist person", warning information and the blacklist person's information are sent to the student learning management workstation.
After receiving the blacklist person warning from the portrait recognition server, the student learning management workstation displays a flashing picture, plays an alert sound, and shows the corresponding classroom number and seat number, reminding the administrator to pay attention and go handle the situation.
S212: the student learning management workstation counts each student's attendance data according to the recognition results stored in the portrait recognition server database and obtains the student's attendance evaluation table according to the set rules, realizing automatic attendance checking of students in remote teaching;
After the teaching activity is finished, that is, after the "teaching end" command is issued, the student learning management workstation extracts each student's data records from the portrait recognition server database, where one record represents one student's attendance during one minute, and evaluates the counted attendance data of each student according to the rules formulated in table 1. The student learning management workstation thereby completes the statistics of each student's attendance and gives attendance scores and comments.
TABLE 1 attendance scoring sheet for trainees
(Table 1 is reproduced as an image in the original publication; its contents are not available in this text.)
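Since Table 1 itself survives only as an image in the source, the per-minute scoring loop can be sketched with hypothetical deduction rules. All rule values below are invented placeholders, not the patent's actual table:

```python
# Hypothetical attendance scoring: start from 100 and deduct per minute
# according to the recognition result. The deduction values and comment
# bands are invented stand-ins for the image-only Table 1.
DEDUCTIONS = {"student": 0, "empty": 2, "unidentified": 1, "blacklist": 5}

def attendance_score(minute_records, start=100):
    """minute_records: list of per-minute recognition results for one student."""
    score = start - sum(DEDUCTIONS.get(r, 0) for r in minute_records)
    score = max(score, 0)
    if score >= 90:
        comment = "excellent attendance"
    elif score >= 60:
        comment = "acceptable attendance"
    else:
        comment = "poor attendance"
    return score, comment

# One 60-minute lesson: present 55 min, seat empty 4 min, 1 min unidentified.
records = ["student"] * 55 + ["empty"] * 4 + ["unidentified"]
```

The example lesson scores 100 − 4×2 − 1 = 91, landing in the top comment band under these made-up rules.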
The face images acquired and processed in steps S100 to S212 are all 2D images; a 2D image is not necessarily a frontal photograph of the whole face, and directly comparing it with the preset student sample library and blacklist person sample library may introduce differences that affect the final comparison result. Therefore, in this application, the computer first constructs and stores a 3D face parameter model. This model obviously differs somewhat from the student's actual face, mainly at the contour points of the face image (such as the eyes, nose tip, mouth corners and eyebrows); by reconstructing the contour points on a standard 3D face model and fitting a Gaussian distribution to the data with Principal Component Analysis (PCA), a 3D parameterized face model corresponding to the contour point parameters is obtained, constructing the 3D face of the current student's image and thus improving the accuracy of automatic student attendance.
Thus, the dome camera acquires an image to be recognized once per minute (not limited to one minute), the system determines whether the student is in position each minute and deducts points according to the conditions in table 1, finally achieving student attendance checking based on face images.
As shown in fig. 1 and 5, student learning quality evaluation may be implemented by the following detailed steps S001 to S004:
S001: the behavior analysis server receives image parameters sent by the video monitoring workstation, namely the pre-coded classroom number, dome camera number and preset position number; meanwhile, images of students in the classroom are periodically collected through dome camera scanning, and the student in the middle area of each image is extracted as the learning state image, wherein the dome camera is arranged in the classroom and the preset position is the position detected by the dome camera;
specifically, a plurality of dome cameras are arranged in a classroom, each dome camera corresponds to a certain monitoring area, for example, the monitoring area corresponding to the dome camera 1 is I, the monitoring area corresponding to the dome camera 2 is II, and the like, and the monitoring areas of all the dome cameras can cover students in all the classrooms after being superposed.
The behavior analysis server receives the classroom number, dome camera number and preset position number sent by the video monitoring workstation, receives the video stream collected by the high-definition monitoring dome camera, decodes it into single-frame images for subsequent processing, and crops each frame, that is, cuts out the student learning picture located at the center of the frame; the size of the cropped picture is generally fixed at 256 × 256 or 512 × 512 pixels, and the cropped picture is used as the learning state image.
After receiving the "teaching start" command, the video monitoring workstation controls each dome camera in the classroom to scan each student or seat on a fixed period according to the preset positions set in advance, scanning once per minute, and collects the video stream of the designated classroom area to acquire learning state images. The collected video stream is transmitted to a network video recorder for recording and storage on one hand, and to the behavior analysis server for processing on the other. After receiving the "teaching end" command, the video monitoring workstation stops the scanning of all monitoring dome cameras.
S002: the behavior analysis server performs whitening processing on the learning state image to obtain a whitened learning state image;
The learning state image is whitened to reduce the correlation between picture pixels and give all its features the same variance; the whitening uses an existing image processing method.
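PCA whitening, the standard technique this step appears to reference, can be sketched as follows; the toy data and the epsilon regularizer are illustrative:

```python
import numpy as np

def pca_whiten(X, eps=1e-5):
    """PCA-whiten the rows of X (samples x features): decorrelate the
    features and rescale each principal component to unit variance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)
    return (Xc @ eigvecs) / np.sqrt(eigvals + eps)

rng = np.random.default_rng(1)
# Toy correlated "pixel" data standing in for flattened image patches.
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.5, 0.0],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.5]])
Xw = pca_whiten(X)
```

After whitening, the sample covariance of `Xw` is (up to the epsilon regularizer) the identity matrix: decorrelated features with equal variance, which is exactly the property the patent cites.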
S003: the behavior analysis server inputs the whitened learning state image into a pre-trained deep neural network model to output the learning behavior of each student, wherein the neural network model comprises a DBN (Deep Belief Network) and a softmax classifier;
The DBN performs image processing on the learning state image to obtain the feature parameters of the student's learning behavior, and the softmax classifier classifies the feature parameters to obtain each student's learning behavior.
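The final classification stage can be sketched as a softmax layer over the DBN's feature vector. The weights below are random placeholders, not trained parameters; a real system would use the features produced by the trained DBN and the learned softmax weights:

```python
import numpy as np

# Behavior classes from the patent's description (English renderings).
BEHAVIORS = ["look ahead", "read with head down", "write with head down",
             "empty seat", "make a call", "play phone", "turn around",
             "look left", "look right", "sleep"]

def softmax(z):
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(features, W, b):
    """features: DBN feature vector; W, b: softmax parameters.
    Returns the predicted behavior label and the class probabilities."""
    probs = softmax(W @ features + b)
    return BEHAVIORS[int(np.argmax(probs))], probs

rng = np.random.default_rng(2)
W = rng.normal(size=(len(BEHAVIORS), 8))  # placeholder weights, 8-dim features
b = np.zeros(len(BEHAVIORS))
label, probs = classify(rng.normal(size=8), W, b)
```

With random weights the predicted label is arbitrary, but the output is always a valid probability distribution over the ten behavior classes.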
S004: steps S001 to S003 are cycled to obtain each student's learning behaviors at different times; the obtained learning behaviors are stored in the behavior analysis server, and after teaching ends, the behavior analysis server uploads the learning behaviors and image parameters to the student learning management workstation, which outputs the students' learning quality evaluation table, realizing learning quality evaluation of remote teaching students;
The learning behaviors include normal and abnormal behaviors: "look ahead", "read with head down", "write with head down", "empty seat", "make a call", "play with phone", "turn around", "look left", "look right" and "sleep". The normal learning behaviors are: look ahead, read with head down, write with head down; the abnormal behaviors are: empty seat, make a call, play with phone, turn around, look left, look right and sleep.
The student learning management workstation presets the student learning behavior categories. After the teaching activity ends, that is, after the "teaching end" command is issued, the student learning management workstation extracts each student's data records from the behavior analysis server database, where one record represents one student's learning behavior during one minute, and compares each obtained learning behavior with the preset categories to determine the category of the student's current learning behavior. After the statistics of each student's learning behaviors is completed, each student's learning quality is evaluated according to the rules formulated in table 2. The student learning management workstation thus evaluates each student's learning quality and gives learning quality scores and comments.
Table 2 student learning quality evaluation table
(Table 2 is reproduced as an image in the original publication; its contents are not available in this text.)
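As with Table 1, Table 2 survives only as an image in the source, so a hypothetical aggregation of per-minute behavior records into a learning quality score is sketched below. All deduction values and comment bands are invented placeholders:

```python
# Hypothetical learning-quality scoring: normal behaviors deduct nothing,
# abnormal behaviors deduct per minute. The values are invented stand-ins
# for the image-only Table 2.
NORMAL = {"look ahead", "read with head down", "write with head down"}
DEDUCTION = {"empty seat": 2, "make a call": 3, "play phone": 3,
             "turn around": 1, "look left": 1, "look right": 1, "sleep": 4}

def quality_score(behavior_records, start=100):
    """behavior_records: list of per-minute behavior labels for one student."""
    score = start
    for b in behavior_records:
        if b not in NORMAL:
            score -= DEDUCTION.get(b, 1)
    score = max(score, 0)
    comment = "good" if score >= 80 else "needs improvement"
    return score, comment

# One 60-minute lesson: attentive 50 min, on the phone 5 min, asleep 5 min.
records = ["look ahead"] * 50 + ["play phone"] * 5 + ["sleep"] * 5
```

Under these made-up rules the example lesson scores 100 − 5×3 − 5×4 = 65 and draws a "needs improvement" comment.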
The behavior analysis server is a recognition and classification server built on a deep neural network model. After receiving the "teaching start" command, it extracts pictures from the transmitted video stream, recognizes the students' learning behaviors in the pictures, and after obtaining the learning behavior classification, transmits each student's learning behavior, the recognition time, the corresponding seat number (in one-to-one correspondence with the dome camera number/preset position number) and the picture used for recognition to the student learning management workstation, recording them in the student learning quality evaluation database. After receiving the "teaching end" command, the behavior analysis server stops recognizing student learning behaviors and transmits no further data to the student learning management workstation.
Through steps S001 to S004, the students' learning behaviors are judged and classified by the deep neural network model, yielding the same student's learning behaviors in different time periods. After teaching ends, the behavior analysis server uploads the stored learning behaviors to the student learning management workstation, which aggregates them and compares them against the set student quality evaluation rules to realize learning quality evaluation of remote teaching students.
Specifically, the training process of the deep neural network model includes steps S201 to S206:
S201: establishing a picture sample set and a verification sample set for student behavior classification training;
The picture sample set is labeled with the normal and abnormal learning behaviors: look ahead, read with head down, write with head down, empty seat, make a call, play with phone, turn around, look left, look right and sleep.
S202: whitening the pictures in the picture sample set to obtain a whitened picture sample set, and dividing the whitened picture sample set into a training sample set and a verification sample set;
The whitening mainly processes the pictures in the sample set with Principal Component Analysis (PCA) to reduce the correlation between picture pixels and give all their features the same variance, improving the performance of the subsequently applied softmax classifier.
S203: inputting the training sample set into the constructed deep neural network model, and acquiring the weights and parameters of the DBN and the softmax classifier, respectively, to obtain the trained deep neural network model;
The pictures in the training sample set, that is, pictures of the normal and abnormal learning behaviors (look ahead, read with head down, write with head down, empty seat, make a call, play with phone, turn around, look left, look right, sleep), are used to train the DBN and the softmax classifier alternately with unsupervised and supervised learning methods, obtaining the corresponding weights and parameters of the DBN and the softmax classifier.
A student learning behavior classification model is built from a Deep Belief Network (DBN) and a softmax classifier: the DBN extracts feature values of the students' learning behaviors from the image, and softmax classifies the extracted feature values. The DBN uses a Convolutional Deep Belief Network (CDBN) with two hidden layers to extract the feature parameters of the students' learning behaviors from the image samples, and a softmax classifier classifies the feature parameters extracted from the training sample set images.
S204: inputting the verification sample set into the trained deep neural network model to obtain the student learning behaviors corresponding to the verification sample set, and judging the recognition accuracy;
The verification sample set is used to verify the accuracy of the extraction and classification of the obtained student learning behavior features.
S205: if the accuracy of the student learning behavior judgment is within a preset range, the training of the deep neural network model is complete;
S206: if the accuracy is not within the preset range, continue training the deep neural network model with the training sample set, optimizing the weights and parameters of the CDBN and the softmax classifier, until the accuracy falls within the preset range.
If the sample set is exhausted but the accuracy is still not within the preset range, the picture sample set for student behavior classification training is re-established and whitened to obtain a new whitened picture sample set, which can again be divided into a training sample set and a verification sample set as needed to complete the training and verification of the deep neural network model, finally yielding weights and parameters of the CDBN and the softmax classifier suitable for this application. The trained deep neural network model is then put into use.
Abnormal gathering of students in a classroom can be detected by installing gun cameras in the classroom. A specific abnormality detection arrangement is as follows: two monitoring cameras are installed along the classroom diagonal, as shown in fig. 4, so that together they cover the full monitoring range of the classroom with no blind spots; in practice, two network high-definition monitoring gun cameras are used. The two gun cameras transmit the collected classroom monitoring video streams, on the one hand, to a network video recorder (NVR) for recording and storage and to a video monitoring workstation for display and preview; on the other hand, the classroom monitoring video streams collected by the two gun cameras are sent to an abnormality detection server for video image processing, which analyzes whether a gathering of people exists.
After receiving a teaching start command from the student learning management workstation, the abnormality detection server processes the two incoming classroom monitoring video streams; upon detecting an abnormal gathering of people, it transmits the abnormality information to the student learning management workstation for handling, and at the same time stores the detected pictures, the corresponding monitoring camera number and the processing time in the abnormality detection server database. After receiving a teaching end command from the student learning management workstation, the abnormality detection server stops processing the video streams sent by the two gun cameras.
The abnormality detection server works in a multi-threaded mode, with each input video stream handled by its own processing thread. As shown in figs. 1 and 5, the abnormality detection server performs the following steps, S301 to S307, in actual operation:
S301: the abnormality detection server acquires classroom scene images of the whole classroom through a gun camera in the classroom;
the gun camera uploads a video stream to the abnormality detection server, which decodes the video stream into frames; each decoded frame serves as an abnormality detection image.
S302: processing the classroom scene image based on a histogram enhanced defogging algorithm to obtain a processed classroom scene image;
the histogram enhancement defogging algorithm can be any existing image processing algorithm; it removes the image blur caused by fog and improves image clarity.
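The patent does not name a specific algorithm, so as a stand-in sketch only, plain global histogram equalization (one common histogram-enhancement step) on an 8-bit grayscale frame could look like:

```python
import numpy as np

def equalize_histogram(gray):
    """Global histogram equalization of an 8-bit grayscale image:
    spreads the intensity histogram to raise contrast in hazy frames.
    Assumes the frame is not a single flat gray level.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    # map each occurring gray level through the normalized CDF
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[gray]
```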
S303: extracting all person images in the processed classroom aisle and open-area scene images based on a difference method;
using the background image acquired in step S130, the currently processed frame image and the background image are subtracted pixel by pixel to obtain the person images of the students; a person image may be a full-body view of the person or at least a view of the head and shoulders.
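A minimal sketch of the difference method described above (the threshold value is an assumption):

```python
import numpy as np

def difference_mask(frame, background, threshold=30):
    """Pixel-by-pixel absolute difference against the static background;
    pixels whose difference exceeds `threshold` become foreground, i.e.
    candidate person pixels."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # binary foreground mask
```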
S304: processing the person images extracted from the classroom aisle and open-area scene with thresholding, morphological operations and image fusion to obtain person images without holes;
the thresholding, morphological operation and image fusion methods used in this application can all be existing image processing methods. After these methods are applied, each person image is solid, with no holes.
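The hole-filling effect of the morphological step can be sketched with a 3x3 binary closing (dilation followed by erosion), implemented here without external libraries; this is an illustration, not the patent's exact operations:

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation built from shifted copies of the mask."""
    out = np.zeros_like(mask)
    p = np.pad(mask, 1)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def erode(mask):
    """3x3 binary erosion, via the complement of dilating the complement."""
    return 1 - dilate(1 - mask)

def close_holes(mask):
    """Morphological closing (dilate, then erode): fills small holes in a
    binary person mask so each person region is solid, as step S304 needs."""
    return erode(dilate(mask))
```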
S305: extracting all person images from the hole-free person images according to the contour features of the head and shoulders, and determining the position of each person image;
S306: calculating the distance between adjacent persons in all person images according to the perspective relation of the images acquired by the gun camera;
It should be understood that the perspective relation of the images collected by the gun camera is a property of the camera itself, and the projection relation may differ with the camera's specifications.
S307: if the distance between adjacent persons is less than a set value, determining that a gathering of people exists.
If the distance between two adjacent persons is smaller than the set value, the two persons are considered to be gathered. If the distances between three consecutive students are all smaller than the set value, the three students are considered a gathering; at this point an abnormal condition exists in the classroom and yellow warning information is issued. If more than five students gather consecutively, red warning information is issued. Whenever yellow or red warning information is sent, the corresponding video picture, monitoring camera number, processing time and other data are stored for later review.
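The warning rules above can be sketched as follows (person positions reduced to one coordinate for simplicity; the thresholds mirror the three-person and five-person rules stated in the text, and the function name is an assumption):

```python
def gathering_alert(positions, set_value):
    """Classify detected person positions into warning levels by counting
    the longest run of consecutive neighbors closer than `set_value`:
    more than five gathered -> "red", three or more -> "yellow".
    """
    positions = sorted(positions)
    best = run = 1
    for a, b in zip(positions, positions[1:]):
        run = run + 1 if b - a < set_value else 1
        best = max(best, run)
    if best > 5:
        return "red"
    if best >= 3:
        return "yellow"
    return "none"
```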
When the student learning management workstation receives yellow warning information, it displays a flashing picture and the corresponding classroom number; when it receives red warning information, it displays a flashing picture, sounds an alarm, and shows the corresponding classroom number, reminding a manager to pay attention and go to handle the situation.
Through steps S301 to S307, after receiving a teaching start command from the student learning management workstation, the abnormality detection server processes the two incoming classroom monitoring video streams; upon detecting an abnormal gathering of people, it transmits the abnormality information to the student learning management workstation for handling, and stores the detected pictures, the corresponding monitoring camera number and the processing time in the abnormality detection server database. After receiving a teaching end command from the student learning management workstation, it stops processing the two video streams. The state of the students in the classroom is thus monitored through the gun cameras, realizing detection of abnormal personnel gathering.
A remote video teaching quality assessment system based on machine vision comprises an acquisition module, an identity matching module, a learning behavior identification module, an abnormal aggregation detection module, an automatic attendance module, a learning quality assessment module and an abnormal information prompt module;
the acquisition module is used for periodically acquiring the face image, the learning state image and the classroom scene image of each student through the scanning of the dome camera, and acquiring the classroom scene image in a classroom in real time through the gun camera;
the identity matching module is used for matching the periodically acquired face images with a preset face sample library after 3D conversion to obtain student identity information, and recording and storing the student identity information;
the learning behavior recognition module is used for transmitting the periodically acquired student learning state images to a pre-trained deep neural network model to obtain the learning behavior of each student, and recording and storing the learning behavior;
the abnormal gathering detection module is used for carrying out real-time image processing on the classroom scene image to obtain abnormal gathering information of personnel;
the automatic attendance module is used for counting and analyzing the recorded and stored student identity information after learning is finished, and obtaining attendance scores and comments of each student according to attendance rules to realize automatic attendance of students in remote teaching;
the learning quality evaluation module is used for counting and analyzing the recorded and stored learning behaviors of the students after learning is finished, obtaining the learning quality score and comment of each student according to the learning quality evaluation rule, and realizing the automatic evaluation of the learning quality of the remote teaching students;
the abnormal information prompting module is used for detecting abnormal gathering information of people during the teaching process; if an abnormal gathering occurs, it outputs alarm information to the supervision workstation and notifies the operator on duty to come and handle it, maintaining the teaching order of remote teaching.
In summary, the present invention achieves the following advantages:
(1) In video teaching, television teaching and network distance teaching, students' attendance can be checked automatically and their learning quality assessed intelligently, with unified attendance standards and learning quality evaluation standards; this supports scientific, standardized evaluation of students' learning quality and observance of teaching discipline, unaffected by human factors.
(2) The system and method realize automatic attendance and intelligent learning quality assessment of students in video teaching, television teaching and network distance teaching, greatly reducing the number of managers needed in the teaching process and putting technology at the service of teaching. The attendance and learning quality evaluation of a student are dynamic and run through the student's whole learning process, so that the student maintains a good learning state and observes teaching discipline throughout;
(3) The invention dynamically identifies student identities in real time during video teaching, television teaching and network distance teaching, and can effectively counter "feigned learning" and "learning by proxy" in some training and learning processes;
(4) The system has an abnormality detection function in video teaching, television teaching and network distance teaching; when an abnormal condition such as a student gathering occurs during teaching, it automatically sends alarm information to the managers at the first moment, reminding them to handle it and preventing the abnormal situation from escalating;
(5) The invention stores all video streams collected by the high-definition monitoring dome cameras and gun cameras in the network video recorder; each student's attendance images, times and learning behavior pictures are stored in the server for future reference, as are the pictures of abnormal conditions. This provides factual support for objective and fair attendance and learning quality evaluation of students, and the stored abnormal-condition pictures and video streams also provide strong technical support for quickly locating and investigating the causes and development of teaching incidents.
It should be noted that in the automatic attendance and learning quality assessment of the present application, there are 6 dome cameras and 2 gun cameras, and the students acquire knowledge through an LCD television; the components can be selected from the following models, but the invention is not limited thereto.
The dome camera is a Hikvision DS-2DF8225, a 2-megapixel network high-definition high-speed intelligent dome camera. It outputs high-definition pictures up to 1920 x 1080 @ 30 fps, rotates 360 degrees horizontally and -20 to 90 degrees vertically, supports 300 preset positions and 8 cruise scans, and has a horizontal rotation speed of 280 degrees/s and a vertical rotation speed of 250 degrees/s, meeting the requirements of scanning and capturing student portraits and behavior in a classroom.
The gun camera is a Hikvision DS-2CD4024F, a 2-megapixel network high-definition camera with a 1/2.8-inch sensor. It supports smooth code-stream settings, adapting to different requirements on image quality and fluency in different scenes, and meets the video monitoring requirements of the teaching classrooms.
The video streams acquired by the dome cameras and gun cameras are stored in a network video recorder, a Hikvision DS-7908N. It connects to network cameras conforming to the ONVIF, PSIA and RTSP standards and to cameras from many mainstream manufacturers, supports adaptive access of H.265- and H.264-coded front ends, has built-in 8/16/16-channel IPC direct-connect PoE network interfaces, supports IPC plug-and-play and centralized IPC management (including parameter configuration, information import/export, voice intercom and upgrading), and supports preview, storage and playback of 4K high-definition network video, meeting the storage and playback requirements of the multi-channel high-definition dome camera and gun camera video streams in classroom video monitoring.
The network switch is an S5720S-28P-LI, Huawei's new-generation green, energy-saving, full-gigabit high-performance Ethernet switch. It meets the requirements of large-bandwidth access and Ethernet multi-service aggregation, provides high-capacity, high-reliability, high-density gigabit ports with gigabit uplink, and supports EEE energy-efficient Ethernet and iStack intelligent stacking, meeting the networking requirements of student attendance and learning quality evaluation.
The portrait recognition server is built around an image processor: an NVIDIA GeForce RTX 2080 Ti graphics card at 1635 MHz, an Intel i7-7700HQ main CPU (4 cores, 8 threads, 2.8 GHz base frequency), 8 GB DDR4 memory and 128 GB storage. With dynamic portrait recognition software, it can process 6 channels of high-definition video streams in real time, accurately identify students' portraits and identity information as well as the portraits and identities of blacklisted violators, and send alert information to the student learning management workstation, meeting the requirement of dynamic recognition of student identity information during education and learning.
The behavior analysis server likewise uses an image processor as its core: an NVIDIA GeForce RTX 2080 Ti graphics card at 1635 MHz, an Intel i7-7700HQ main CPU (4 cores, 8 threads, 2.8 GHz base frequency), 8 GB DDR4 memory and 128 GB storage. With behavior analysis software for students' theoretical learning, it can process 6 channels of high-definition video streams in real time and accurately identify the students' behaviors during learning, meeting the requirements of student behavior analysis during education and learning.
The abnormality detection server also uses an image processor as its core: an NVIDIA GeForce RTX 2080 Ti graphics card at 1635 MHz, an Intel i7-7700HQ main CPU (4 cores, 8 threads, 2.8 GHz base frequency), 8 GB DDR4 memory and 128 GB storage. With video detection software for personnel gathering and headcount, it can process the gun camera high-definition video streams in real time, accurately identify whether the students in the monitoring video are gathering and how many are gathered, and send alarm prompt information to the student learning management workstation, meeting the requirement of detecting abnormal student gatherings in the classroom during education and learning.
The video monitoring workstation is a Lenovo Qitian A530-B016 desktop computer: an A6 Pro-8570 CPU at 3.8 GHz, 16 GB memory, a 1 TB hard disk and a 19.5-inch widescreen LED liquid crystal display, running monitoring video management platform software. It can set and control the preset positions of the 6 network high-definition monitoring dome cameras, and preview, display, store, query and replay the video streams collected by the 6 dome cameras and the 2 network high-definition monitoring gun cameras during education and learning.
The student learning management workstation is likewise a Lenovo Qitian A530-B016 desktop computer: an A6 Pro-8570 CPU at 3.8 GHz, 16 GB memory, a 1 TB hard disk and a 19.5-inch widescreen LED liquid crystal display. It controls the portrait recognition server, the behavior analysis server and the abnormality detection server during education and learning, statistically analyzes each student's attendance and gives attendance scores and comments, statistically analyzes each student's behavior during learning and gives learning quality scores and comments, and issues audio/video warning signals upon receiving blacklisted-person information from the portrait recognition server or gathering-abnormality information from the abnormality detection server.
As shown in fig. 3, each dome camera covers a fixed monitoring area of 12 students, so each dome camera needs 12 preset positions set in advance. With a video scanning period of 1 minute per camera, each student is captured for 5 seconds per scanning period, and the portrait recognition server and the behavior analysis server use these 5-second clips to identify the corresponding student and their learning behavior. Meanwhile, 2 gun cameras are arranged in the classroom, as shown in fig. 4, to monitor abnormal student conditions: the video streams they collect are uploaded to the abnormality detection server for student abnormality detection.
The above description covers only a preferred embodiment of the present invention, but the scope of the invention is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed by the invention, according to its technical solutions and inventive concept, shall be covered by the scope of the invention.

Claims (10)

1. A remote video teaching quality assessment method based on machine vision is characterized by comprising the following steps:
the method comprises the steps of periodically obtaining face images and learning state images of students through ball machine scanning, and obtaining classroom scene images in a classroom in real time through a gun camera;
matching the periodically acquired face images with a preset face sample library after 3D conversion to obtain student identity information, and recording and storing the student identity information;
transmitting the periodically acquired student learning state images to a pre-trained deep neural network model to obtain, record and store the learning behavior of each student;
performing real-time image processing on a classroom scene image to obtain abnormal personnel gathering information;
after learning is finished, the recorded and stored student identity information is counted and analyzed, attendance scores and comments of students are obtained according to attendance rules, and automatic attendance of the students in remote teaching is realized;
after learning is finished, the recorded and stored learning behaviors of the student are counted and analyzed, and according to the learning quality evaluation rule, the learning quality score and comment of the student are obtained, so that the automatic evaluation of the learning quality of the remote teaching student is realized;
in the teaching process, abnormal gathering information of personnel is detected, if the abnormal gathering of the personnel occurs, warning information is output to a supervision workstation, and an operator on duty is informed to come for processing, so that the teaching order maintenance of remote teaching is realized.
2. The machine vision-based remote video teaching quality assessment method according to claim 1, wherein the step of matching the periodically collected face images with a preset face sample library after 3D transformation to obtain student identity information, recording and storing comprises:
periodically acquiring images of students in a classroom by scanning through a dome camera, and extracting face images in the images;
extracting face parameters in the face image, and corresponding the face parameters to a preset face 3D parameter model to obtain a reshaped current face 3D model;
and extracting a face front image in the current face 3D model, and matching the face front image with a preset face sample library to obtain student identity information.
3. The machine-vision-based remote video teaching quality assessment method according to claim 2, wherein before extracting face parameters from a face image, corresponding the face parameters to a preset face 3D parameter model, and obtaining a reshaped current face 3D model, the method comprises:
acquiring a background image of each preset position of the dome camera when no student exists;
based on a difference method, carrying out difference on the face image and the background image to obtain a difference image;
judging whether the image after the difference is larger than a preset threshold value or not;
if not, the current preset position is empty, and no student exists;
if so, sequentially carrying out binarization and smoothing filtering on the image after the difference to obtain a face area of the image after the difference, and extracting parameters in the face area to obtain face parameters.
4. The machine-vision-based remote video teaching quality assessment method according to claim 2, before acquiring the face image, comprising:
establishing a student sample library and a blacklist personnel sample library;
acquiring pre-coded image parameters, wherein the dome camera is arranged in a classroom, the image parameters comprise a classroom number, a dome camera number and a preset position number, and the preset position is a position monitored by the dome camera;
and acquiring the video stream uploaded by the dome camera, and decoding the video stream to obtain a face image.
5. The machine vision-based remote video teaching quality assessment method according to claim 1, wherein the step of transmitting the periodically collected trainee learning state images to the pre-trained deep neural network model to obtain, record and store the learning behavior of each trainee comprises the following steps:
periodically acquiring images of students in a classroom by scanning through a ball machine, and extracting the students in the middle area of the images as learning state images;
whitening the learning state image, and inputting the whitened learning state image into a pre-trained deep neural network model, wherein the neural network model comprises a DBN (deep belief network) and a softmax classifier;
the DBN carries out image processing on the learning state image to obtain characteristic parameters of the learning behaviors of the student;
classifying the characteristic parameters by using a softmax classifier to obtain the learning behavior of each student, wherein the learning behaviors comprise: front-looking, head-down reading, head-down writing, empty seat, making a call, playing a mobile phone, turning back, left-side looking, right-side looking and sleeping.
6. The machine-vision-based remote video teaching quality assessment method according to claim 5, before acquiring the learning state image, comprising:
establishing a learning behavior classification of the student, comprising: front-looking, head-down reading, head-down writing, empty seat, making a call, playing a mobile phone, turning back, left-side looking, right-side looking and sleeping;
acquiring pre-coded image parameters, wherein the dome camera is arranged in a classroom, the image parameters comprise a classroom number, a dome camera number and a preset position number, and the preset position is a position monitored by the dome camera;
acquiring a video stream uploaded by a dome camera, and decoding the video stream to obtain a decoded image;
and cutting the learning image of the student in the decoded image to obtain the learning state image of the student.
7. The machine-vision-based remote video teaching quality assessment method according to claim 5, wherein the training process of said deep neural network model comprises:
establishing a student behavior classification training picture sample set and a verification sample set;
whitening the pictures in the picture sample set to obtain a whitened picture sample set, and dividing the whitened picture sample set into a training sample set and a verification sample set;
inputting the training sample set into the constructed deep neural network model, and acquiring weights and parameters corresponding to the DBN and softmax classifiers respectively to obtain the trained deep neural network model;
inputting the verification sample set into the trained deep neural network model to obtain the student learning behaviors corresponding to the verification sample set, and evaluating the accuracy of judging the student learning behaviors;
if the accuracy of the student learning behavior judgment is within a preset range, completing deep neural network model training;
and if the accuracy of the student's learning behavior judgment is not within the preset range, continuing to perform optimization training on the trained deep neural network model through the training sample set until the accuracy of the student's learning behavior judgment is within the preset range, and finishing the deep neural network model training.
8. The machine-vision-based remote video teaching quality assessment method according to claim 1, wherein acquiring classroom scene images in a classroom in real time with a gun camera and performing image processing to obtain abnormal gathering information of people in aisles and open areas specifically comprises:
processing the classroom scene image based on a histogram enhanced defogging algorithm to obtain a processed classroom scene image;
extracting all personnel images in the processed images of the classroom aisle and the open scene based on a difference method;
processing personnel images extracted from a classroom aisle and a scene in an open place by adopting threshold processing, morphological operation and image fusion to obtain a personnel image without a hole;
extracting all personnel images in the personnel images without holes according to the contour characteristics of the head and the shoulders of the personnel, and determining the positions of all the personnel images;
calculating the distance between adjacent persons in all person images according to the perspective relation of the images acquired by the gun camera;
and if the distance between adjacent people is smaller than a set value, determining that people are gathered.
9. The machine-vision-based remote video teaching quality assessment method according to claim 1, wherein before periodically acquiring the face image and learning state image of each student through dome camera scanning and acquiring the classroom scene image in the classroom in real time through the gun camera, the method comprises:
determining a monitoring area of each ball machine;
setting a preset position of a ball machine and a scanning path of the ball machine;
recording the front face images of all students into the face recognition server database, and extracting face recognition features as the preset face sample library.
10. A remote video teaching quality assessment system based on machine vision is characterized by comprising an acquisition module, an identity matching module, a learning behavior identification module, an abnormal aggregation detection module, an automatic attendance checking module, a learning quality assessment module and an abnormal information prompt module;
the acquisition module is used for periodically acquiring the face image, the learning state image and the classroom scene image of each student through the scanning of the dome camera, and acquiring the classroom scene image in a classroom in real time through the gun camera;
the identity matching module is used for matching the periodically acquired face images with a preset face sample library after 3D conversion to obtain student identity information, and recording and storing the student identity information;
the learning behavior recognition module is used for transmitting the periodically acquired student learning state images to a pre-trained deep neural network model to obtain the learning behavior of each student, and recording and storing the learning behavior;
the abnormal gathering detection module is used for carrying out real-time image processing on the classroom scene image to obtain abnormal gathering information of personnel;
the automatic attendance module is used for counting and analyzing the recorded and stored student identity information after learning is finished, and obtaining attendance scores and comments of each student according to attendance rules to realize automatic attendance of students in remote teaching;
the learning quality evaluation module is used for counting and analyzing the recorded and stored learning behaviors of the students after learning is finished, obtaining the learning quality score and comment of each student according to the learning quality evaluation rule, and realizing the automatic evaluation of the learning quality of the remote teaching students;
the abnormal information prompting module is used for detecting abnormal gathering information of people during the teaching process; if an abnormal gathering occurs, it outputs alarm information to the supervision workstation and notifies the operator on duty to come and handle it, maintaining the teaching order of remote teaching.
CN202011138213.2A 2020-10-22 2020-10-22 Remote video teaching quality evaluation method and system based on machine vision Pending CN112257591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011138213.2A CN112257591A (en) 2020-10-22 2020-10-22 Remote video teaching quality evaluation method and system based on machine vision

Publications (1)

Publication Number Publication Date
CN112257591A true CN112257591A (en) 2021-01-22

Family

ID=74264620



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015000414A1 (en) * 2013-07-03 2015-01-08 Sun Xiquan Performance evaluation system and method for laboratory learning
CN205179236U (en) * 2015-11-13 2016-04-20 湖南财政经济学院 System for gathering of control campus crowd
CN107316261A (en) * 2017-07-10 2017-11-03 湖北科技学院 A kind of Evaluation System for Teaching Quality based on human face analysis
CN108765611A (en) * 2018-05-21 2018-11-06 中兴智能视觉大数据技术(湖北)有限公司 A kind of dynamic human face identification Work attendance management system and its management method
CN109886850A (en) * 2019-02-21 2019-06-14 无锡加视诚智能科技有限公司 A kind of data comprehensive management system and management method for smart classroom
CN111639565A (en) * 2020-05-19 2020-09-08 重庆大学 Audio and video combined classroom quality comprehensive evaluation method


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102308443B1 (en) * 2021-02-19 2021-10-05 유비트론(주) Smart advanced lecture and recoding system
CN113762107A (en) * 2021-08-23 2021-12-07 海宁奕斯伟集成电路设计有限公司 Object state evaluation method and device, electronic equipment and readable storage medium
CN113762107B (en) * 2021-08-23 2024-05-07 海宁奕斯伟集成电路设计有限公司 Object state evaluation method, device, electronic equipment and readable storage medium
CN115273863A (en) * 2022-06-13 2022-11-01 广东职业技术学院 Compound network class attendance system and method based on voice recognition and face recognition
CN116363575A (en) * 2023-02-15 2023-06-30 南京诚勤教育科技有限公司 Classroom monitoring management system based on wisdom campus
CN116363575B (en) * 2023-02-15 2023-11-03 南京诚勤教育科技有限公司 Classroom monitoring management system based on wisdom campus
CN116091272A (en) * 2023-04-13 2023-05-09 内江市感官密码科技有限公司 Campus abnormal activity monitoring method, device, equipment and medium
CN116091272B (en) * 2023-04-13 2023-06-20 内江市感官密码科技有限公司 Campus abnormal activity monitoring method, device, equipment and medium
CN116543350A (en) * 2023-05-24 2023-08-04 三余堂文化艺术泰州有限公司 Remote education entity classroom target distribution uniformity measuring system
CN116543350B (en) * 2023-05-24 2024-04-02 菲尔莱(北京)科技有限公司 Remote education entity classroom target distribution uniformity measuring system

Similar Documents

Publication Publication Date Title
CN112257591A (en) Remote video teaching quality evaluation method and system based on machine vision
Lim et al. Automated classroom monitoring with connected visioning system
Carlson et al. Lineup composition, suspect position, and the sequential lineup advantage.
CN102013176B (en) On-line study system
CN111079113A (en) Teaching system with artificial intelligent control and use method thereof
CN110647842B (en) Double-camera classroom inspection method and system
US8861779B2 (en) Methods for electronically analysing a dialogue and corresponding systems
CN105791299A (en) Unattended monitoring type intelligent on-line examination system
Hu et al. Research on abnormal behavior detection of online examination based on image information
WO2021077382A1 (en) Method and apparatus for determining learning state, and intelligent robot
CN110889672A (en) Student card punching and class taking state detection system based on deep learning
CN111523444B (en) Classroom behavior detection method based on improved Openpost model and facial micro-expression
CN111291613B (en) Classroom performance evaluation method and system
CN114463828B (en) Invigilation method and system based on testimony unification, electronic equipment and storage medium
CN107133611A (en) A kind of classroom student nod rate identification with statistical method and device
CN113052127A (en) Behavior detection method, behavior detection system, computer equipment and machine readable medium
CN111523445A (en) Examination behavior detection method based on improved Openpos model and facial micro-expression
CN116132637A (en) Online examination monitoring system and method, electronic equipment and storage medium
CN113706348A (en) Online automatic invigilation system and method based on face recognition
RU2005100267A (en) METHOD AND SYSTEM OF AUTOMATIC VERIFICATION OF THE PRESENCE OF A LIVING FACE OF A HUMAN IN BIOMETRIC SECURITY SYSTEMS
CN111860308A (en) Intelligent student sleep safety management method based on video behavior recognition
CN117218680A (en) Scenic spot abnormity monitoring data confirmation method and system
CN113688739A (en) Classroom learning efficiency prediction method and system based on emotion recognition and visual analysis
CN113542668A (en) Monitoring system and method based on 3D camera
Dargham et al. Estimating the number of cameras required for a given classroom for face-based smart attendance system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination