CN110334610B - Multi-dimensional classroom quantification system and method based on computer vision - Google Patents
- Publication number: CN110334610B
- Application number: CN201910516795.4A
- Authority
- CN
- China
- Prior art keywords
- class
- classroom
- video
- data
- students
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a multidimensional classroom quantification system and method based on computer vision. The system comprises: a data acquisition module, which captures the front-view video of the students and the video of the teacher teaching in real time; a data real-time processing module, which detects and identifies the teacher's teaching behaviors and the students' in-class learning behaviors; a data visual display module, which quantifies the classroom from four angles and displays the results visually, thereby constructing a novel bidirectional (teacher and student) evaluation system for quantifying the classroom; and a data storage module with three parts (retention of the original video of the whole class, retention of cropped frames of abnormal student behaviors, and generation of a classroom evaluation report), which stores the data from the original video through to the processing results. The invention evaluates the class automatically and intelligently, lets teachers review the classroom teaching process after class, and feeds back teaching quality from the quantified student-behavior results, so that the teaching effect can be improved in a targeted way and students' learning progress promoted.
Description
Technical Field
The invention relates to the fields of artificial intelligence in education and teaching evaluation systems, and in particular to a multidimensional classroom quantification system and method based on computer vision.
Background
In recent years, with the development of artificial intelligence and China's strong support for it, China has become one of the countries applying artificial-intelligence technology most actively and widely. Education, an old and traditional industry, is also changing quietly: the Internet is reshaping learning, and artificial intelligence is being integrated with education. By enabling analysis of educational scenes with artificial intelligence, the computability, understandability, and evaluability of those scenes can be addressed, making possible accurate and efficient classroom-behavior analysis, a sustainable classroom evaluation system, and standardized collection and aggregation of educational big data. For a long time, traditional teaching evaluation has relied mainly on questionnaires, classroom observation arranged by teaching administrators, and post-class review; a data-driven, integrated teaching evaluation system based on artificial-intelligence technology has not yet been available.
Disclosure of Invention
In the prior art, quantifying and assessing a classroom generally requires time-consuming and labor-intensive manual observation, lacks data support, and is highly subjective. To address these defects, the invention provides a multidimensional classroom quantification system and method based on computer vision, which use artificial intelligence and educational big data to quantify and evaluate the classroom automatically, thereby reducing the teaching-assessment workload and improving its effectiveness.
The technical scheme adopted for solving the technical problems is as follows:
the invention provides a multidimensional classroom quantification system based on computer vision, which comprises: the system comprises a data acquisition module, a data real-time processing module, a data visual display module and a data storage module; wherein:
the data acquisition module is used for acquiring the video of the front face of a student in a class and the video of the teaching of a teacher in real time and sending the video to the data real-time processing module;
the data real-time processing module is used for realizing the detection and identification of teaching behaviors of teachers and the detection and identification of learning behaviors of students in class, and transmitting the detected data into the data visual display module and the data storage module;
the data visual display module comprises a novel classroom evaluation index system and is used to quantify the classroom from four angles and display the results visually; the index system comprises student concentration, classroom emotion-change level, activity of the student seating areas, and teacher teaching type, thereby constructing a novel bidirectional (teacher and student) evaluation system for quantifying the classroom;
the data storage module comprises three parts: retention of the original video of the whole class, retention of cropped frames of abnormal student behaviors, and generation of a classroom evaluation report. These parts store the data from the original video through to the processing results, enabling the teacher to review the classroom teaching process after class and to receive feedback on teaching quality from the quantified student-behavior results, so that the teaching effect can be improved in a targeted way and students' learning progress promoted.
Furthermore, the data acquisition module uses two cameras, at the front and the rear of the classroom, to acquire the two video streams, the front-view video of the students and the teaching video of the teacher, and then sends them to the data processing module.
Further, the data processing module of the invention is used for realizing two paths of video detection and identification of students and teachers, wherein:
in the first video stream, an OpenCV tool captures frames from the front-view video of the students in class and feeds them into a deep learning network; the network uses a YOLOv3-based target detection method and outputs each student's current learning behavior and position;
meanwhile, in the second video stream, an OpenCV tool captures clips, which are fed into a deep RNN network for classification and identification of the teacher's teaching mode.
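The patent describes a two-stream pipeline (YOLOv3 on student frames, a deep RNN on teacher clips) without giving code. The following minimal Python sketch shows only the dispatch logic; `detect_student_behaviors` and `classify_teaching_mode` are stubs standing in for the real models, and the labels and boxes they return are illustrative, not the patent's outputs:

```python
# Two-stream processing sketch: student frames go to a per-frame
# detector, teacher clips go to a sequence classifier. Both model
# calls are stubs for the YOLOv3 and deep-RNN networks named above.

def detect_student_behaviors(frame):
    """Stub detector: returns (behavior label, bounding box) pairs."""
    return [("reading", (120, 80, 40, 60)), ("sleeping", (300, 95, 42, 58))]

def classify_teaching_mode(clip):
    """Stub clip classifier: returns one teaching-style label."""
    return "explaining while operating PPT"

def process_streams(student_frames, teacher_clips):
    """Run the two streams independently and collect both label sets."""
    detections = [detect_student_behaviors(f) for f in student_frames]
    styles = [classify_teaching_mode(c) for c in teacher_clips]
    return detections, styles

detections, styles = process_streams(["frame0"], ["clip0"])
print(styles[0])  # -> explaining while operating PPT
```

In a real deployment the frames and clips would come from `cv2.VideoCapture` reads on the two camera streams, as the description states.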
Further, the teaching behaviors of the teacher and the learning behaviors of the students identified in the data real-time processing module comprise:
the student learning behaviors include: raising the head, reading, taking notes, taking pictures, playing with a mobile phone, sleeping, speaking, adjusting head posture, and arranging clothes;
the teacher teaching behaviors comprise ten actions: operating the PPT while explaining, standing on the podium while explaining, sitting while explaining, standing among the students while explaining, walking while explaining, only playing multimedia video, writing on the blackboard, only operating multimedia, asking questions, and others.
Further, the novel class evaluation index system in the data visualization display module comprises a student concentration degree, a class emotion change level, a student class area liveness degree and a teacher teaching type; wherein:
student concentration includes secondary indicators such as the head-up rate, phone-watching rate, reading-and-writing rate, and sleeping rate, each calculated as:

rate = m / n

where m is the number of occurrences of the given behavior among all students in the class at the current moment, and n is the total number of behavior occurrences of all students in the class at the current moment;
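Under one reading of the m/n definition above (each rate is the count of a given behavior divided by the total count of all detected behaviors at that moment), the secondary indicators can be computed as in this sketch; the behavior labels are illustrative:

```python
from collections import Counter

def behavior_rates(labels):
    """Per-behavior rate = m / n, where m counts one behavior and
    n is the total number of detected behaviors at this moment."""
    n = len(labels)
    counts = Counter(labels)
    return {behavior: m / n for behavior, m in counts.items()}

labels = ["raising head", "raising head", "reading", "sleeping"]
rates = behavior_rates(labels)
print(rates["raising head"])  # -> 0.5
```

By construction the rates over all observed behaviors sum to 1, which matches a proportion-style index.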
the classroom emotion level is divided into three states: high, stable, and low. Raising the head, reading, taking notes, and taking pictures belong to the high state; speaking, adjusting head posture, and arranging clothes belong to the stable state; playing with a mobile phone and sleeping belong to the low state. The classroom emotion level is calculated as the state with the largest share among the three states within each 1-minute window, which is taken as the current classroom emotion level for that minute;
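The 1-minute emotion level described above is a majority vote over the per-behavior states. A minimal sketch, assuming the behavior-to-state mapping given in the text (the dictionary below restates it and is not code from the patent):

```python
from collections import Counter

# Mapping from detected behaviors to the three emotion states,
# as enumerated in the description above.
STATE_OF = {
    "raising head": "high", "reading": "high",
    "taking notes": "high", "taking pictures": "high",
    "speaking": "stable", "adjusting head posture": "stable",
    "arranging clothes": "stable",
    "playing mobile phone": "low", "sleeping": "low",
}

def emotion_level(behaviors_in_window):
    """Majority state over a 1-minute window of detected behaviors."""
    states = Counter(STATE_OF[b] for b in behaviors_in_window)
    return states.most_common(1)[0][0]

window = ["reading", "reading", "sleeping", "speaking"]
print(emotion_level(window))  # -> high
```

Tie-breaking between equally frequent states is not specified in the patent; `Counter.most_common` here simply returns one of the tied states.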
the classroom-area activity level is calculated by partitioning the detected student behavior states into four regions (upper, lower, left, and right) according to the overall seat distribution, i.e., the student seating area facing the blackboard; the classroom emotion levels of the regions are compared, and each region's activity is represented by the color depth of a heat-map cell.
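The four-region partition above can be sketched as a quadrant assignment over the student-camera frame. The coordinate convention (origin at the frame's top-left) and the choice to count only high-state behaviors are assumptions for illustration, not specified by the patent:

```python
def quadrant(x, y, width, height):
    """Assign a detected student position to one of the four
    seat-area quadrants facing the blackboard."""
    horiz = "left" if x < width / 2 else "right"
    vert = "upper" if y < height / 2 else "lower"
    return f"{vert}-{horiz}"

def region_activity(detections, width, height):
    """Count active (high-state) behaviors per quadrant; the counts
    would drive the color depth of the heat map described above."""
    counts = {}
    for (x, y), state in detections:
        if state == "high":
            q = quadrant(x, y, width, height)
            counts[q] = counts.get(q, 0) + 1
    return counts

dets = [((100, 100), "high"), ((500, 400), "high"), ((500, 100), "low")]
print(region_activity(dets, 640, 480))  # -> {'upper-left': 1, 'lower-right': 1}
```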
Further, the data storage module comprises an original video storage sub-module, an abnormal behavior reservation sub-module and a classroom evaluation report sub-module; wherein:
the original video storage sub-module is used for storing the front video of the students on the lessons and the teaching video of the teacher;
the abnormal-behavior retention sub-module takes the abnormal-behavior labels (sleeping and playing with a mobile phone) output by the deep learning network and uses the OpenCV toolbox to crop and retain the corresponding behavior-region frames;
and the classroom evaluation report sub-module generates a pie chart of classroom behavior proportions, a chart of the time share of each classroom emotion level, a distribution map of regional activity over the whole class, and a distribution map of the teacher's teaching styles over the whole class.
Further, the system of the invention runs on a PC equipped with a GPU graphics card.
The invention provides a multidimensional classroom quantification method based on computer vision, which comprises the following steps:
step 1: video stream data acquisition;
two video streams are acquired, one from a camera mounted at the blackboard facing the students and one from a camera at the rear of the classroom facing the teacher, and the acquired video data are input into the video processing module;
step 2: video data processing;
the video input in step 1 is analyzed along two paths, covering the two streams: the front-view video of the students and the teaching video of the teacher. For the student video, OpenCV captures frames in real time and inputs the current image frame to a trained YOLOv3 target detection model, which detects 9 classes of student learning behaviors; that is, the network outputs each student's position and learning-behavior state at the current moment. For the teacher video, an OpenCV tool captures frames of the lecture video and inputs them to a trained deep-RNN video classification model that classifies the teacher's teaching style. The label information output for the two dimensions, teacher and student, is then passed to the data visual display module and the data storage module;
step 3: data visualization display;
the student position and behavior-state information and the teacher teaching-style information from step 2 are displayed visually in 5 sub-modules: a classroom-behavior main interface, a visualization area, a dynamic statistics area, a frame-capture area, and a classroom-report area. The main interface displays, in real time, the students' current learning states and the teacher's teaching style for the input video, using models trained on a large amount of collected data. The visualization area shows the quantitatively analyzed classroom behavior through the overall indices of classroom concentration, classroom emotion-change level, classroom-area activity, and changes in the teacher's teaching style. The dynamic statistics area counts indices such as the head-up rate, head-down rate, and the number of students looking at mobile phones. The frame-capture area retains screenshots of abnormal student states for subsequent analysis. The classroom-report area generates the classroom portrait and a macroscopic overall classroom evaluation and analysis report;
step 4: storing data;
first, the original classroom video data, comprising the two streams of the students and the teacher, are retained for subsequent viewing; meanwhile, according to the student labels and position information output in step 2, frames of the abnormal behaviors of playing with a mobile phone and sleeping are cropped and retained at 16 x 16 pixels; finally, a classroom evaluation and analysis report is generated, comprising a pie chart and table of classroom behavior distribution over the whole class, a heat map of classroom-area activity changes, a line chart of classroom emotion-level fluctuation, and a proportion chart of the teacher's teaching styles.
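Cropping the retained abnormal-behavior patch is plain array slicing on the frame. A sketch using the 16 x 16 pixel size stated above; the border clamping is an added assumption (the patent does not say how detections near the frame edge are handled):

```python
import numpy as np

def crop_behavior_region(frame, x, y, size=16):
    """Crop a size x size patch around a detected abnormal behavior
    at (x, y), clamping so the patch stays inside the frame."""
    h, w = frame.shape[:2]
    x0 = max(0, min(x, w - size))
    y0 = max(0, min(y, h - size))
    return frame[y0:y0 + size, x0:x0 + size]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in video frame
patch = crop_behavior_region(frame, 630, 10)
print(patch.shape)  # -> (16, 16, 3)
```

In the described system the patch would then be written to disk (e.g. with an OpenCV image-write call) alongside the behavior label.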
The beneficial effects of the invention are as follows: the computer-vision-based multidimensional classroom quantification system and method are data-driven and use artificial intelligence and big-data analysis to quantify and evaluate the classroom automatically. This makes classroom evaluation comparatively objective and interpretable from data, reduces the subjectivity of traditional questionnaire-based evaluation, and removes a great deal of manual, mechanical labor from classroom evaluation.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a schematic overall structure of an embodiment of the present invention;
FIG. 2 is a block diagram of a visual display module according to an embodiment of the present invention;
FIG. 3 is a block diagram of a data storage module according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in Fig. 1, the computer-vision-based multidimensional classroom quantification system of this embodiment adopts deep-learning computer-vision technology to quantify the classroom scene from the two dimensions of teacher and student, and evaluates the classroom scientifically on the basis of data. The system comprises: a data acquisition module, a data real-time processing module, a data visual display module, and a data storage module; wherein:
the data acquisition module is used for acquiring the video of the front face of a student in a class and the video of the teaching of a teacher in real time and sending the video to the data real-time processing module;
the data real-time processing module is used for realizing the detection and identification of teaching behaviors of teachers and the detection and identification of learning behaviors of students in class, and transmitting the detected data into the data visual display module and the data storage module;
the data visual display module comprises a novel classroom evaluation index system and is used to quantify the classroom from four angles and display the results visually; the index system comprises student concentration, classroom emotion-change level, activity of the student seating areas, and teacher teaching type, thereby constructing a novel bidirectional (teacher and student) evaluation system for quantifying the classroom;
the data storage module comprises three parts: retention of the original video of the whole class, retention of cropped frames of abnormal student behaviors, and generation of a classroom evaluation report. These parts store the data from the original video through to the processing results, enabling the teacher to review the classroom teaching process after class and to receive feedback on teaching quality from the quantified student-behavior results, so that the teaching effect can be improved in a targeted way and students' learning progress promoted.
The classroom quantification method based on computer vision provided by the embodiment of the invention comprises the following steps:
step 1: video data acquisition;
two channels of Aowei high-definition video acquisition equipment are installed in the classroom, one at the right side of the blackboard facing the students and one at the rear of the classroom, to acquire the front-view video of the students and the teaching video of the teacher;
step 2: a video data processing module;
the video data acquired in step 1 are input into different deep learning models along two paths. For the front-view student video, a YOLOv3 (YOLOv3: An Incremental Improvement) network detects the learning behavior of each student in the classroom in real time at 24 fps; the 9 specific student behaviors, such as playing with a mobile phone, taking notes, and reading, are shown in Table 1, and each student's position box is drawn. For detection of the teacher's teaching style, key frames of the teacher video are captured with an OpenCV tool at 1 s intervals and input into a deep RNN (Deep RNN Framework for Visual Sequential Applications) network for teaching-style classification, outputting styles such as writing on the blackboard, operating multimedia, asking questions, and standing beside the podium while explaining; the 9 specific teaching-style types are shown in Table 1.
Table 1: Behavior classes for students and teachers
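The 1 s key-frame interval at 24 fps described above reduces to simple frame-index arithmetic. A small sketch (the function name and defaults are illustrative):

```python
def keyframe_indices(total_frames, fps=24, interval_s=1.0):
    """Frame indices sampled once per interval_s seconds, matching
    the 1 s key-frame interval used for the teacher stream."""
    step = int(round(fps * interval_s))
    return list(range(0, total_frames, step))

print(keyframe_indices(100, fps=24))  # -> [0, 24, 48, 72, 96]
```

With OpenCV, each selected index would be read via a `cv2.VideoCapture` positioned at that frame before the grab.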
Step 3: a visual display module;
the student behavior positions and class labels and the teacher teaching-style recognition results from step 2 are displayed visually in 5 sub-modules: the main interface, the dynamic index area, the frame-capture area, the visualization area, and the classroom-report area. The visualization area further comprises a classroom emotion-level change line chart, a classroom-area activity heat map, a classroom concentration change chart, and a teacher teaching-behavior change chart, quantifying and evaluating the classroom from the two angles of teacher and student, as shown in Fig. 2.
Step 4: a data storage module;
the data storage module is divided into three sub-modules. First, the original classroom video data, comprising the front-view student stream and the teacher stream, are retained for subsequent viewing. Meanwhile, according to the student labels and position information output in step 2, frames of the abnormal behaviors of playing with a mobile phone and sleeping are cropped and retained at 16 x 16 pixels. Finally, a classroom evaluation and analysis report is generated, comprising a classroom behavior distribution chart for the whole class, a heat map of classroom-area activity changes, a classroom emotion-change fluctuation chart, and a teacher teaching-style proportion chart. The behavior distribution chart tabulates the accumulated occurrence counts of student behaviors over the whole class and draws a proportional pie chart; the area-activity heat map summarizes regional activity over the whole class; the emotion fluctuation chart plots the students' classroom emotion changes on a 1-minute time scale; and the teaching-style proportion chart tabulates the accumulated occurrences of the teacher's teaching styles and draws a proportional pie chart. The overall framework is shown in Fig. 3.
It will be understood that modifications and variations will be apparent to those skilled in the art from the foregoing description, and it is intended that all such modifications and variations be included within the scope of the following claims.
Claims (1)
1. The multi-dimensional classroom quantification method based on computer vision is characterized by being realized by a multi-dimensional classroom quantification system based on computer vision, and the system comprises: the system comprises a data acquisition module, a data real-time processing module, a data visual display module and a data storage module; wherein:
the data acquisition module is used for acquiring the video of the front face of a student in a class and the video of the teaching of a teacher in real time and sending the video to the data real-time processing module;
the data real-time processing module is used for realizing the detection and identification of teaching behaviors of teachers and the detection and identification of learning behaviors of students in class, and transmitting the detected data into the data visual display module and the data storage module;
the data real-time processing module is used for realizing two paths of video detection and identification of students and teachers, wherein:
in the first video stream, an OpenCV tool captures frames from the front-view video of the students in class and feeds them into a deep learning network; the network uses a YOLOv3-based target detection method and outputs each student's current learning behavior and position;
meanwhile, in the second video stream, an OpenCV tool captures clips, which are fed into a deep RNN network for classification and identification of the teacher's teaching mode;
the data visual display module comprises a novel classroom evaluation index system and is used to quantify the classroom from four angles and display the results visually; the index system comprises student concentration, classroom emotion-change level, activity of the student seating areas, and teacher teaching type, thereby constructing a novel bidirectional (teacher and student) evaluation system for quantifying the classroom;
the data storage module comprises three parts: retention of the original video of the whole class, retention of cropped frames of abnormal student behaviors, and generation of a classroom evaluation report. These parts store the data from the original video through to the processing results, enabling the teacher to review the classroom teaching process after class and to receive feedback on teaching quality from the quantified student-behavior results, so that the teaching effect can be improved in a targeted way and students' learning progress promoted;
the data storage module comprises three sub-modules, namely an original video storage sub-module, an abnormal behavior reservation sub-module and a classroom evaluation report sub-module; wherein:
the original video storage sub-module is used for storing the front video of the students on the lessons and the teaching video of the teacher;
the abnormal-behavior retention sub-module takes the abnormal-behavior labels (sleeping and playing with a mobile phone) output by the deep learning network and uses the OpenCV toolbox to crop and retain the corresponding behavior-region frames;
the classroom evaluation report sub-module generates a pie chart of classroom behavior proportions, a chart of the time share of each classroom emotion level, a distribution map of classroom-area activity over the whole class, and a distribution map of the teacher's teaching styles over the whole class;
the data acquisition module uses two cameras, at the front and the rear of the classroom, to acquire the two video streams, the front-view video of the students and the teaching video of the teacher, and then sends them to the data processing module;
the teacher teaching behaviors and the student learning behaviors identified in the data real-time processing module comprise:
the student learning behavior includes: raising head, reading, taking notes, taking pictures, playing mobile phones, sleeping, speaking, adjusting head postures and arranging clothes;
the teacher teaching behavior comprises ten behaviors in total: operating the PPT while explaining, standing on the podium while explaining, sitting while explaining, standing among the students while explaining, walking while explaining, playing only a multimedia video, writing on the blackboard, operating only the multimedia, asking questions, and other behaviors;
the novel class evaluation index system in the data visual display module comprises student concentration degree, class emotion change level, student class area liveness and teacher teaching type; wherein:
student concentration includes 5 secondary indicators, such as the head-up rate, mobile-phone-watching rate, reading-and-writing rate and sleeping rate; each rate is calculated as rate = m / n, where m is the number of occurrences of the corresponding action among all students in the class at the current moment, and n is the total number of actions detected among all students in the class at the current moment;
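Under that definition each secondary indicator is a simple frequency ratio over the behaviors detected at the current moment. A minimal sketch (the behavior labels and function name are illustrative):

```python
from collections import Counter

def concentration_rates(behaviors):
    """Compute rate = m / n for each behavior label, where m is the count
    of that behavior among all students at the current moment and n is the
    total number of detected behaviors."""
    n = len(behaviors)
    counts = Counter(behaviors)
    return {label: m / n for label, m in counts.items()}

rates = concentration_rates(
    ["raising head", "raising head", "reading", "sleeping"]
)
print(rates["raising head"])  # 0.5
print(rates["sleeping"])      # 0.25
```

Because every detected behavior contributes to n, the rates across all labels sum to 1 at each moment.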
the classroom emotion level is divided into three states: high, stable and sinking; head raising, reading, note taking and photographing are classed as high; speaking, adjusting head posture and arranging clothes are classed as stable; playing with a mobile phone and sleeping are classed as sinking; the classroom emotion level is calculated as the state that accounts for the largest share among the three states within the current 1-minute window;
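The 1-minute emotion level is thus a majority vote over the states of the behaviors detected in that window. A sketch using the patent's behavior-to-state grouping (function and label names are illustrative):

```python
from collections import Counter

# Behavior-to-state mapping, following the grouping given in the patent.
STATE_OF = {
    "raising head": "high", "reading": "high",
    "taking notes": "high", "taking pictures": "high",
    "speaking": "stable", "adjusting head posture": "stable",
    "arranging clothes": "stable",
    "playing mobile phone": "sinking", "sleeping": "sinking",
}

def emotion_level(window_behaviors):
    """Majority state over one minute of detected behaviors."""
    states = Counter(STATE_OF[b] for b in window_behaviors)
    return states.most_common(1)[0][0]

minute = ["reading", "reading", "speaking", "sleeping", "raising head"]
print(emotion_level(minute))  # high
```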
the classroom-area liveness is calculated by cross-partitioning the detected student behavior states according to the overall seat distribution: facing the blackboard, the student seating area is divided into four regions (upper, lower, left and right); the classroom emotion levels of the regions are compared, and the liveness of each region is represented by the colour depth of its cell in a heat map;
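One plausible reading of the cross partition is a quadrant split of the student-facing camera image; the exact boundary choice and the use of high-emotion counts as the liveness measure are assumptions for illustration:

```python
from collections import Counter

def region_of(cx, cy, w, h):
    """Cross-partition: assign a detection centre (cx, cy) in a w x h
    image to one of four seating regions, facing the blackboard."""
    return ("upper" if cy < h / 2 else "lower",
            "left" if cx < w / 2 else "right")

def region_liveness(detections, w, h):
    """Count high-emotion behaviors per region; in the system this count
    would drive the colour depth of the region's heat-map cell."""
    live = Counter()
    for cx, cy, state in detections:
        if state == "high":
            live[region_of(cx, cy, w, h)] += 1
    return live

dets = [(100, 100, "high"), (120, 90, "high"), (500, 400, "sinking")]
print(region_liveness(dets, 640, 480)[("upper", "left")])  # 2
```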
the system runs on a PC equipped with a GPU graphics card;
the method comprises the following steps:
step 1: video stream data acquisition;
two video streams are acquired: a camera mounted near the blackboard faces the students, and a camera at the rear of the classroom faces the teacher; the acquired video data are input into the video processing module;
step 2: video data processing;
the video input in step 1 is analysed along two paths, corresponding to the two streams: the front-facing video of the students in class and the teaching video of the teacher; for the student video, frames are captured in real time using OpenCV and the current image frame is input to a trained YOLOv3 target detection model, which detects 9 classes of student learning behavior, i.e. the network outputs the position of each student and that student's learning behavior state at the current moment; for the teacher video, frames are extracted using the OpenCV toolbox and input to a trained deep RNN video classification model to classify and recognise the teacher's teaching style; the label information output for the two dimensions, teacher and student, is then passed to the data visual display module and the data storage module;
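The capture-and-detect loop itself needs OpenCV and the trained weights, but the post-processing that turns raw YOLOv3 outputs into the (position, behavior label) pairs handed downstream can be sketched. The raw tuple format, threshold and function name below are assumptions, not the patent's implementation:

```python
STUDENT_BEHAVIORS = [
    "raising head", "reading", "taking notes", "taking pictures",
    "playing mobile phone", "sleeping", "speaking",
    "adjusting head posture", "arranging clothes",
]  # the 9 learning-behavior classes named in the patent

def filter_detections(raw, conf_threshold=0.5):
    """Keep confident detections and attach the class label.

    raw: iterable of (x, y, w, h, confidence, class_id) tuples, as one
    might decode them from a YOLOv3 head (format is an assumption).
    Returns (box, label) pairs for the visualization and storage modules.
    """
    out = []
    for x, y, w, h, conf, cls in raw:
        if conf >= conf_threshold and 0 <= cls < len(STUDENT_BEHAVIORS):
            out.append(((x, y, w, h), STUDENT_BEHAVIORS[cls]))
    return out

raw = [(10, 20, 40, 60, 0.92, 5), (200, 80, 30, 50, 0.31, 0)]
print(filter_detections(raw))  # [((10, 20, 40, 60), 'sleeping')]
```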
step 3: data visualization display;
the student position and behavior state information and the teacher teaching style information input in step 2 are displayed visually across 5 sub-areas: a classroom behavior main interface, a visualization area, a dynamic statistics index area, a frame capture area and a classroom image area; the main interface area displays the current learning state of the students and the teaching style of the teacher in real time from the input video, using models trained on a large amount of collected data; the visualization area displays the quantitative analysis of student classroom behavior via the overall evaluation indexes: classroom concentration, classroom emotion change level, classroom-area liveness and the change in the teacher's teaching style; the dynamic statistics index area counts indexes such as the head-up rate, head-down rate and the number of students looking at mobile phones; the frame capture area keeps screenshots of abnormal student states for subsequent analysis; the classroom image area generates the classroom image and the macroscopic, whole-class evaluation analysis report;
step 4: storing data;
first, the original video data of the class, comprising the two streams of students and teacher, is retained for subsequent viewing; meanwhile, according to the label and position information of the students output in step 2, frames of the abnormal behaviors of playing with a mobile phone and sleeping are cropped and retained at 16 x 16 pixels; finally, a classroom evaluation analysis report is generated, comprising a pie chart and table of classroom behavior distribution for the whole class, a heat map of classroom-area liveness change, a line chart of classroom emotion level fluctuation, and a chart of teacher teaching style proportions.
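The numbers behind the behavior-distribution pie chart in the report are simple percentage shares over the whole session; a minimal sketch (the function name and labels are illustrative, and in practice the result would be fed to a plotting library):

```python
from collections import Counter

def behavior_distribution(all_labels):
    """Percentage share of each behavior over the whole class session --
    the data series behind the report's pie chart and table."""
    n = len(all_labels)
    return {b: round(100 * c / n, 1) for b, c in Counter(all_labels).items()}

session = ["reading"] * 6 + ["sleeping"] * 2 + ["speaking"] * 2
print(behavior_distribution(session))
# {'reading': 60.0, 'sleeping': 20.0, 'speaking': 20.0}
```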
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910516795.4A CN110334610B (en) | 2019-06-14 | 2019-06-14 | Multi-dimensional classroom quantification system and method based on computer vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910516795.4A CN110334610B (en) | 2019-06-14 | 2019-06-14 | Multi-dimensional classroom quantification system and method based on computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110334610A CN110334610A (en) | 2019-10-15 |
CN110334610B true CN110334610B (en) | 2024-01-26 |
Family
ID=68141123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910516795.4A Active CN110334610B (en) | 2019-06-14 | 2019-06-14 | Multi-dimensional classroom quantification system and method based on computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110334610B (en) |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110708392B (en) * | 2019-10-17 | 2022-05-10 | 重庆工商职业学院 | Student management system and method based on cloud platform |
CN110837795A (en) * | 2019-11-04 | 2020-02-25 | 防灾科技学院 | Teaching condition intelligent monitoring method, device and equipment based on classroom monitoring video |
CN110782185A (en) * | 2019-11-09 | 2020-02-11 | 上海光数信息科技有限公司 | Classroom behavior recognition and analysis method |
CN110853428A (en) * | 2019-12-04 | 2020-02-28 | 广州云蝶科技有限公司 | Recording and broadcasting control method and system based on Internet of things |
CN111027865B (en) * | 2019-12-12 | 2024-04-02 | 山东大学 | Teaching analysis and quality assessment system and method based on behavior and expression recognition |
CN111178273A (en) * | 2019-12-30 | 2020-05-19 | 云知声智能科技股份有限公司 | Education method and device based on emotion change |
CN111275592B (en) * | 2020-01-16 | 2023-04-18 | 浙江工业大学 | Classroom behavior analysis method based on video images |
CN113515976B (en) * | 2020-04-10 | 2023-08-29 | 华中科技大学 | Intelligent experiment investigation system based on multi-scene video analysis |
CN111553599A (en) * | 2020-04-29 | 2020-08-18 | 江苏加信智慧大数据研究院有限公司 | Course evaluation system and method |
CN111291840A (en) * | 2020-05-12 | 2020-06-16 | 成都派沃智通科技有限公司 | Student classroom behavior recognition system, method, medium and terminal device |
CN111626252B (en) * | 2020-06-02 | 2023-04-07 | 北京中广上洋科技股份有限公司 | Teaching video analysis method and device |
CN112001944A (en) * | 2020-07-09 | 2020-11-27 | 浙江大华技术股份有限公司 | Classroom teaching quality evaluation data acquisition method, computer equipment and medium |
CN111915148B (en) * | 2020-07-10 | 2023-11-03 | 北京科技大学 | Classroom teaching evaluation method and system based on information technology |
CN112270231A (en) * | 2020-10-19 | 2021-01-26 | 北京大米科技有限公司 | Method for determining target video attribute characteristics, storage medium and electronic equipment |
CN112861730A (en) * | 2021-02-09 | 2021-05-28 | 北京文香信息技术有限公司 | Feedback method and device of classroom behavior, electronic equipment and storage medium |
CN112990105B (en) * | 2021-04-19 | 2021-09-21 | 北京优幕科技有限责任公司 | Method and device for evaluating user, electronic equipment and storage medium |
CN113239915B (en) * | 2021-07-13 | 2021-11-30 | 北京邮电大学 | Classroom behavior identification method, device, equipment and storage medium |
CN113743263B (en) * | 2021-08-23 | 2024-02-13 | 华中师范大学 | Teacher nonverbal behavior measurement method and system |
CN113792626A (en) * | 2021-08-30 | 2021-12-14 | 华中师范大学 | Teaching process evaluation method based on teacher non-verbal behaviors |
CN115239527B (en) * | 2022-06-27 | 2024-05-07 | 重庆市科学技术研究院 | Teaching behavior analysis system based on knowledge base teaching feature fusion and modeling |
CN115412679B (en) * | 2022-08-23 | 2023-06-27 | 国网浙江省电力有限公司培训中心 | Interactive teaching quality assessment system with direct recording and broadcasting function and method thereof |
CN115907507B (en) * | 2022-10-13 | 2023-11-14 | 华中科技大学 | Student class behavior detection and learning analysis method combined with class scene |
CN115619279A (en) * | 2022-11-03 | 2023-01-17 | 华中师范大学 | Classroom real-record digital course resource quality evaluation method and system |
CN115829234A (en) * | 2022-11-10 | 2023-03-21 | 武汉天天互动科技有限公司 | Automatic supervision system based on classroom detection and working method thereof |
CN116029871A (en) * | 2023-01-04 | 2023-04-28 | 广州市慧诚计算机***科技有限公司 | Visual management method and system for intelligent classroom |
CN116596719B (en) * | 2023-07-18 | 2023-09-19 | 江西科技学院 | Computer room computer teaching quality management system and method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108648757B (en) * | 2018-06-14 | 2020-10-16 | 北京中庆现代技术股份有限公司 | Analysis method based on multi-dimensional classroom information |
CN109359606A (en) * | 2018-10-24 | 2019-02-19 | 江苏君英天达人工智能研究院有限公司 | A kind of classroom real-time monitoring and assessment system and its working method, creation method |
CN109800663A (en) * | 2018-12-28 | 2019-05-24 | 华中科技大学鄂州工业技术研究院 | Teachers ' teaching appraisal procedure and equipment based on voice and video feature |
- 2019-06-14 CN CN201910516795.4A patent/CN110334610B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110334610A (en) | 2019-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110334610B (en) | Multi-dimensional classroom quantification system and method based on computer vision | |
WO2020082971A1 (en) | Real-time classroom monitoring and evaluation system and operation and creation method thereof | |
WO2019028592A1 (en) | Teaching assistance method and teaching assistance system using said method | |
CN111242962A (en) | Method, device and equipment for generating remote training video and storage medium | |
CN110619460A (en) | Classroom quality assessment system and method based on deep learning target detection | |
Hieu et al. | Identifying learners’ behavior from videos affects teaching methods of lecturers in Universities | |
CN113486744A (en) | Student learning state evaluation system and method based on eye movement and facial expression paradigm | |
CN108304779B (en) | Intelligent regulation and control method for student education management | |
CN111325853B (en) | Remote coaching system and method based on augmented reality glasses | |
CN112419809A (en) | Intelligent teaching monitoring system based on cloud data online education | |
CN112634096A (en) | Classroom management method and system based on intelligent blackboard | |
CN112597813A (en) | Teaching evaluation method and device and computer readable storage medium | |
CN115829234A (en) | Automatic supervision system based on classroom detection and working method thereof | |
CN115797829A (en) | Online classroom learning state analysis method | |
CN111652045B (en) | Classroom teaching quality assessment method and system | |
Lu et al. | Recognition of students’ abnormal behaviors in English learning and analysis of psychological stress based on deep learning | |
CN112270264A (en) | Multi-party interactive teaching system | |
Molnár et al. | Innovative assessment technologies: Comparing ‘face-to-face’and game-based development of thinking skills in classroom settings | |
Whannell et al. | An evaluation of the use of an online demonstration school | |
CN116996722B (en) | Virtual synchronous classroom teaching system in 5G network environment and working method thereof | |
TWI750613B (en) | System and method for presenting performance of remote teaching | |
Wang | Students Status Analysis and Class Teaching Effect Evaluation System Based on the Analysis of Monitoring Video | |
Zhou et al. | A Model for Analyzing the Behavior of Classroom Teacher-Student Interaction Based on Deep Learning
Parvathy et al. | A Review on Students Attention Monitoring Systems | |
Qiusha et al. | Automatic Classification of Instructional Video Based on Different Presentation Forms |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||