CN116994465A - Intelligent teaching method, system and storage medium based on Internet

Intelligent teaching method, system and storage medium based on Internet

Info

Publication number
CN116994465A
Authority
CN
China
Prior art keywords
student
video stream
stream data
teacher
distance
Prior art date
Legal status
Withdrawn
Application number
CN202311096931.1A
Other languages
Chinese (zh)
Inventor
李君英
王瑞芳
蔡霞
Current Assignee
Puyang Vocational and Technical College
Original Assignee
Puyang Vocational and Technical College
Priority date
Filing date
Publication date
Application filed by Puyang Vocational and Technical College
Priority to CN202311096931.1A
Publication of CN116994465A
Legal status: Withdrawn (current)


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of data processing, and particularly relates to an intelligent teaching method, system and storage medium based on the Internet. The invention provides, at the teacher terminal, the option of not displaying the student-side video stream data, which reduces the amount of data transmitted so that lessons run smoothly without stuttering, and at the same time helps the teacher grasp the students' learning states in time by analyzing images of the students attending class.

Description

Intelligent teaching method, system and storage medium based on Internet
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to an intelligent teaching method, system and storage medium based on the Internet.
Background
Internet technology has permeated every aspect of life, and teaching, English teaching for example, is no exception. When students and teachers hold class over the Internet, the teacher needs to obtain video of the students attending class through the Internet so as to follow their learning states in time by watching it. However, when there are many students, transmitting a large amount of video data over the network imposes an excessive load, and the lesson may stutter. For a subject with strong front-to-back relevance such as English, stuttering greatly affects both lecturing and learning: if a word or phrase the teacher has just explained is missed because of stuttering, the later words and sentences that build on it cannot be followed either, and the efficiency of the lesson drops sharply. Moreover, during an online lesson students' attention is not as focused as in an offline class, the teacher cannot observe every student in time, and without timely feedback on the students' states the subsequent teaching and learning are affected.
To address related problems, the prior art provides an interactive teaching method, system, device and storage medium for visual sharing. In that online education method, first smart glasses collect a first video in real time and second smart glasses collect a second video in real time, and one pair of glasses transmits its collected video to the other. By having the teacher and a student each wear smart glasses that can record and play video, the two sides can share their own viewpoints and experience the other side's viewpoint in real time, with corresponding interaction prompts guiding the consistency of their operations. In similar prior art, the Chinese invention patent CN114690982B relates to an intelligent teaching method for physics teaching: a handwriting recognition area is first generated on a first layer of a display plane and a blackboard-writing display area on a second layer; the teacher's handwriting in the recognition area is converted into print, typeset, and finally shown in the blackboard-writing display area, preserving both the typographic regularity of slide-based teaching, which facilitates reading and later review by students, and the flexibility of blackboard writing. Neither solution, however, considers the stuttering caused by large online classes, nor the problem that the teacher cannot intuitively follow the learning state of every student in class.
Disclosure of Invention
According to the invention, two choices, displaying the student videos and not displaying them, are provided at the teacher terminal, so that the teacher can choose according to the current network conditions and prevent the lesson from stuttering; at the same time, the image analysis module analyzes the students' learning states during class, helping the teacher learn the students' class states in time, so that the teaching effect is better.
In order to achieve the above object, the present invention provides an intelligent teaching method based on the Internet, which mainly comprises the following steps:
S1, arranging classes through a teacher terminal to form a class list and sending the class list to a remote server, which sends it to each student terminal; the remote server also reminds the teacher and the students a preset time before each class begins that the course is about to start;
S2, after a lesson starts, the teacher terminal acquires teacher-side video stream data and sends it to the remote server, and each student terminal acquires student-side video stream data and sends it to the remote server; the remote server stores the teacher-side video stream data in a first memory and sends it to each student terminal, where the students watch it, and stores the student-side video stream data in a second memory;
S3, the remote server acquires the student-side video stream data, intercepts a student image from it once every preset time and stores the images in a third memory, acquires and analyzes the student images in the third memory, judges the student's learning state according to the analysis result, and sends the learning state to the teacher terminal;
S4, two choices, displaying student videos and not displaying them, are provided at the teacher terminal; when displaying student videos is selected, the videos of all students or of selected students can be displayed, and the remote server sends the corresponding student-side video stream data to the teacher terminal according to the choice made there.
As a preferred technical solution of the present invention, the process in which the image analysis module acquires and analyzes the student images in the third memory comprises the following steps:
S31, sequentially acquiring a preset number of student images from the third memory and first analyzing the face orientation in them to determine whether each face is facing the screen; when the face orientations in a preset proportion of the student images are not facing the screen, indicating that the student's attention is not on learning, the analysis result is sent to the teacher terminal; otherwise S32 is executed;
S32, the image analysis module analyzes the eye gaze direction in the student images; when the eye gaze directions in a preset proportion of the student images are not facing the screen, the analysis result is sent to the teacher terminal; otherwise S33 is executed;
S33, the image analysis module analyzes the facial expression in the student images, judges the student's learning state according to the facial expression, and sends the learning state to the teacher terminal.
As a preferred technical solution of the present invention, the process of analyzing the face orientation of a student image comprises the following steps:
S311, first analyzing the face orientation in the horizontal direction: a facial feature recognition model is trained in advance and used to recognize the face and the facial features, a circumscribed figure is established around the face, corresponding marks are made in the circumscribed figure based on the facial features, and a first mark point, a second mark point and a third mark point are drawn;
S312, connecting the first mark point and the second mark point to form a first connecting line C1, drawing a second connecting line C2 perpendicular to C1 through the third mark point, drawing a third connecting line C3 parallel to C2 through the first mark point and a fourth connecting line C4 parallel to C2 through the second mark point, obtaining the distance R1 between C3 and the left border of the circumscribed figure, and obtaining the distance R2 between C4 and the right border of the circumscribed figure;
S313, comparing the distance R1 with the distance R2: when R1 is larger than R2, R2 is subtracted from R1 to obtain a first difference, and the first difference is divided by R2 to obtain a first value; when R1 is smaller than R2, R1 is subtracted from R2 to obtain a second difference, and the second difference is divided by R1 to obtain a second value;
S314, comparing the first value or the second value with a preset first threshold: when the first value or the second value is larger than the first threshold, the face orientation is classified as not facing the screen; when it is smaller than the first threshold, the face orientation in the vertical direction is analyzed further.
As a preferred technical solution of the present invention, the process of analyzing the face orientation in the vertical direction comprises the following steps:
S3141, obtaining the distance R3 between the first connecting line C1 and the upper border of the circumscribed figure; when R3 is not equal to a preset second threshold, the second threshold is subtracted from R3 to obtain a difference, the absolute value of the difference is taken, and the absolute value is divided by the second threshold to obtain a third value;
S3142, when the third value is larger than a preset third threshold, the face orientation is classified as not facing the screen; when the third value is smaller than the third threshold, the face orientation is classified as facing the screen.
As a preferred technical solution of the present invention, the process in which the image analysis module analyzes the eye gaze direction comprises the following steps:
S321, recognizing the eye contour with the facial feature recognition model and drawing a corresponding connection figure according to the eye contour; drawing a first identification point, a second identification point and a third identification point in the connection figure according to the pupil position and the positions of the left and right end points of the eye respectively; connecting the first identification point and the second identification point to form a first line segment, connecting the first identification point and the third identification point to form a second line segment, and connecting the second identification point and the third identification point to form a third line segment;
S322, first judging the eye gaze direction in the vertical direction: calculating a first included angle between the first line segment and the third line segment and a second included angle between the second line segment and the third line segment, calculating the average of the two angles and comparing it with a preset first angle; when the average angle is larger than the first angle, the eye gaze direction is classified as deviating from the screen; when the average angle is smaller than the first angle, the eye gaze direction is classified as facing the screen in the vertical direction, and S323 is executed to judge the eye gaze direction in the horizontal direction;
S323, dividing the third line segment into three equal parts, called the first, second and third equal parts from left to right; drawing a perpendicular from the first identification point to the third line segment and obtaining the intersection point of the perpendicular and the third line segment; when the intersection point falls in the first or the third equal part, the eye gaze direction is classified as deviating from the screen, and when it falls in the second equal part, the eye gaze direction is classified as facing the screen.
As a preferred technical solution of the present invention, the process in which the image analysis module analyzes facial expressions comprises the following steps:
S331, presetting a mapping table of the facial actions corresponding to different facial expressions and the emotion value corresponding to each expression, the emotion value representing the emotion conveyed by the current expression;
S332, training a facial action detection model in advance, detecting the facial actions in a student image with the model, and looking up the corresponding facial expression and emotion value in the mapping table according to the recognized actions;
S333, each time a student image is detected, storing the corresponding facial expression and emotion value; after a preset number of student images have been detected, adding the emotion values to obtain a total emotion value and comparing it with a preset fourth threshold: a total emotion value larger than the fourth threshold indicates that the student's current emotional state is relatively positive, and a total smaller than the fourth threshold indicates that it is relatively negative.
The invention also provides an intelligent teaching system based on the Internet, which comprises the following modules:
the remote server module, which sends the class list to each student terminal, reminds the teacher and the students a preset time before each class begins that the course is about to start, receives the teacher-side video stream data and stores it in a first memory, sends it to each student terminal module, and receives the student-side video stream data and stores it in a second memory;
the teacher terminal module, which forms a class list for the teacher and the students, establishes a communication connection with the remote server, sends the class list to the remote server after classes are arranged, collects teacher-side video stream data in the course of teaching, and sends it to the remote server;
the student terminal module, which collects student-side video stream data in the course of a lesson, establishes a communication connection with the remote server, sends the student-side video stream data to the remote server module, intercepts a student image from the student-side video stream data once every preset time, and stores each student's information and the corresponding student images in a third memory;
and the image analysis module, which analyzes the student images, judges the students' learning states according to the analysis results, and sends the learning states to the teacher terminal.
The present invention also provides a storage medium storing program instructions, wherein the program instructions, when executed, control a device in which the storage medium is located to perform any one of the methods described above.
Compared with the prior art, the invention has the following beneficial effects:
In the invention, classes are arranged through the teacher terminal to form a class list, which is sent to each student terminal through the remote server. In the course of a lesson, the teacher terminal and the student terminals send their respectively acquired video streams through the remote server for viewing by the other party; the student terminals intercept and store a student image from the student-side video stream data every preset time; the image analysis module acquires and analyzes the student images, judges the students' learning states according to the analysis results, and sends the learning states to the teacher terminal; and two choices, displaying student videos and not displaying them, are provided at the teacher terminal. Because the invention offers the option of not displaying the student-side video stream data at the teacher terminal, the teacher can choose not to display it when network conditions are poor, which reduces the amount of data transmitted and the network load so that the lesson runs more smoothly without stuttering, while analysis of the images of students attending class helps the teacher grasp their learning states in time.
Drawings
FIG. 1 is a flow chart of the steps of the Internet-based intelligent teaching method of the present application;
FIG. 2 is a block diagram of the Internet-based intelligent teaching system of the present application;
FIG. 3 is a schematic diagram of the analysis of the eye gaze direction in the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another element. For example, a first xx script may be referred to as a second xx script, and similarly, a second xx script may be referred to as a first xx script, without departing from the scope of this disclosure.
The present application provides an intelligent teaching method based on the Internet, as shown in FIG. 1, which is implemented mainly by executing the following steps:
S1, arranging classes through a teacher terminal to form a class list and sending the class list to a remote server, which sends it to each student terminal; the remote server also reminds the teacher and the students a preset time before each class begins that the course is about to start;
S2, after a lesson starts, the teacher terminal acquires teacher-side video stream data and sends it to the remote server, and each student terminal acquires student-side video stream data and sends it to the remote server; the remote server stores the teacher-side video stream data in a first memory and sends it to each student terminal, where the students watch it, and stores the student-side video stream data in a second memory;
S3, the remote server acquires the student-side video stream data, intercepts a student image from it once every preset time and stores the images in a third memory, acquires and analyzes the student images in the third memory, judges the students' learning states according to the analysis results, and sends the learning states to the teacher terminal;
S4, two choices, displaying student videos and not displaying them, are provided at the teacher terminal; when displaying student videos is selected, the videos of all students or of selected students can be displayed, and the remote server sends the corresponding student-side video stream data to the teacher terminal according to the choice made there.
Specifically, take an English class held over the Internet as an example of the Internet-based intelligent teaching system. A class list is first generated for the teacher and the students through the teacher terminal and sent to the remote server; after receiving it, the remote server sends the class list to each student terminal, and shortly before class it reminds the English teacher and the students that the course is about to start. During class, the teacher terminal acquires the teacher-side video stream data and sends it to the remote server, which forwards it to each student terminal for the students to watch and learn from, and also stores it in the first memory to provide data for later viewing and playback by the students. Each student terminal acquires student-side video stream data and sends it to the remote server, which stores it in the second memory.
Because every student-side video stream is sent to the remote server and would then have to be forwarded to the teacher terminal, a large volume of data is transmitted; under poor network conditions this makes the lesson stutter and impairs the teaching effect, especially for a subject with strong knowledge relevance such as English, where students who cannot keep up with the English teacher's progress because of stuttering are also greatly affected in their later learning. The invention therefore provides two choices at the teacher terminal: displaying the student videos and not displaying them. The teacher can choose not to display the student videos when network conditions are poor, and choose to display all or part of them when conditions are good; the remote server sends the corresponding student-side video stream data to the teacher terminal according to the choice made there.
When the teacher chooses not to watch the student videos, the teacher cannot directly grasp the students' learning states; and even when network conditions are good and all student videos are displayed, a teacher who is lecturing usually has neither the time nor the energy to attend to the learning state of every student. Therefore, after receiving the student-side video stream data, the remote server intercepts and stores a learning image of each student every preset time, the image analysis module analyzes each learning image, and the students' learning states are judged from the analysis results. For example, if fifty percent or more of a student's images show a face orientation not facing the screen, or a face facing the screen but an eye gaze direction away from it, the student is distracted or disturbed and the attention is not on the class; the analysis result is sent to the teacher terminal so that the teacher can remind the student, the learning state is corrected in time, and the student concentrates better. If, on the other hand, the face orientation and the eye gaze direction in fifty percent or more of the images face the screen but the facial expression presents a negative emotion such as confusion for most of the time, the learning state is still not right: the student may have doubts about what the teacher is explaining, and if the teacher cannot learn of this in time, the teaching effect is greatly reduced. For English in particular, content not learned or not learned well now affects later learning, because the front-to-back relevance of English is strong: the more difficult words, sentences and articles studied later build on the words and sentences studied earlier, so what is not mastered early hinders what comes later. By sending the analysis results to the teacher terminal in time, the teacher can adjust the lecturing method or interact with the students according to their learning states, making the teaching more effective; analyzing the learning states and reminding the teacher helps the teacher obtain each student's state in time even while concentrating on lecturing, remind students whose state is poor, and achieve a better teaching effect and higher learning efficiency.
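To make the data flow of S2 to S4 concrete, the following Python sketch mirrors the server-side logic described above. It is a minimal illustration only: the class and method names (RemoteServer, play, frame), the data structures and the 60-second capture interval are assumptions, not the patent's actual implementation.

```python
# Minimal sketch (assumed names/interfaces) of the server-side logic of S2-S4:
# store both streams, intercept a student image every preset time, and forward
# student video only per the teacher terminal's display choice.
import time
from dataclasses import dataclass, field

CAPTURE_INTERVAL_S = 60.0  # hypothetical "preset time" between screenshots


@dataclass
class StudentSession:
    student_id: str
    last_capture: float = 0.0
    images: list = field(default_factory=list)  # stands in for the third memory


class RemoteServer:
    def __init__(self):
        self.first_memory = []           # teacher-side video stream data (S2)
        self.second_memory = []          # student-side video stream data (S2)
        self.displayed_students = set()  # IDs the teacher chose to display (S4)

    def on_teacher_chunk(self, chunk, student_terminals):
        self.first_memory.append(chunk)
        for terminal in student_terminals:  # every student watches the teacher
            terminal.play(chunk)

    def on_student_chunk(self, session, chunk, teacher_terminal):
        self.second_memory.append((session.student_id, chunk))
        now = time.monotonic()
        if now - session.last_capture >= CAPTURE_INTERVAL_S:  # S3: intercept
            session.images.append(chunk.frame())              # a student image
            session.last_capture = now
        if session.student_id in self.displayed_students:     # S4: forward only
            teacher_terminal.play(session.student_id, chunk)  # selected streams
```

The point of this arrangement is that the student streams always reach the server, so images can be intercepted for analysis, while forwarding to the teacher terminal, the expensive step under poor network conditions, happens only for the streams the teacher selected.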
Further, the process in which the image analysis module acquires and analyzes the student images in the third memory comprises the following steps:
S31, sequentially acquiring a preset number of student images from the third memory and first analyzing the face orientation in them to determine whether each face is facing the screen; when the face orientations in a preset proportion of the student images are not facing the screen, indicating that the student's attention is not on learning, the analysis result is sent to the teacher terminal; otherwise S32 is executed;
S32, the image analysis module analyzes the eye gaze direction in the student images; when the eye gaze directions in a preset proportion of the student images are not facing the screen, the analysis result is sent to the teacher terminal; otherwise S33 is executed;
S33, the image analysis module analyzes the facial expression in the student images, judges the student's learning state according to the facial expression, and sends the learning state to the teacher terminal.
Specifically, to help the teacher grasp the students' learning states in time, the student terminal intercepts and stores a student image from the student-side video stream data every preset time, and analyzing a preset number of student images yields the student's learning situation over a period of time. The face orientation is analyzed first: if, after a preset number of learning images are analyzed (say one hundred images in total), the face orientation in more than a preset proportion of them (fifty or more, for example) is not facing the screen, the student's attention was not on the class during the captured period, possibly because of distraction or a poor state, and the analysis result is sent to the teacher terminal to remind the teacher to pay attention to that student's learning state. Otherwise all or most of the analyzed face orientations face the screen, and the student's attention cannot yet be judged directly, so the image analysis module goes on to analyze the eye gaze direction in each image and judge whether the gaze deviates from the screen. If the eye gaze direction in fifty percent or more of the images is not facing the screen, the student's attention is not on the class, and the result is sent to the teacher terminal to remind the teacher; otherwise the gaze in most of the images faces the screen, and since the learning state still cannot be judged directly (the student may have questions or other problems), the image analysis module continues by analyzing the facial expression in each image and judging the learning state from it. For example, if the facial expression in fifty percent or more of the images conveys a negative emotion such as confusion, the student may not have understood the knowledge being explained; the learning state is sent to the teacher terminal in time to remind the teacher, who can adjust the teaching method or interact with the student to make the teaching more effective.
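The cascaded judgment of S31 to S33 can be summarized in a short sketch; the fifty-percent proportion and the three helper callables are illustrative assumptions, not part of the patent's disclosure.

```python
# Minimal sketch of the S31-S33 cascade over one batch of student images.
PRESET_PROPORTION = 0.5  # assumed "preset proportion"

def analyse_student_images(images, face_facing, gaze_facing, expression_state):
    """face_facing/gaze_facing: callables returning True when the face or gaze
    in one image faces the screen; expression_state: callable judging the
    learning state from the whole batch of images (S33)."""
    n = len(images)
    # S31: face orientation gate
    if sum(not face_facing(img) for img in images) >= PRESET_PROPORTION * n:
        return "attention not on learning (face turned away)"
    # S32: eye gaze gate
    if sum(not gaze_facing(img) for img in images) >= PRESET_PROPORTION * n:
        return "attention not on learning (gaze off screen)"
    # S33: fall through to facial expression analysis
    return expression_state(images)
```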
Further, the process of analyzing the face orientation of a student image comprises the following steps:
S311, first analyzing the face orientation in the horizontal direction: a facial feature recognition model is trained in advance and used to recognize the face and the facial features, a circumscribed figure is established around the face, corresponding marks are made in the circumscribed figure based on the facial features, and a first mark point, a second mark point and a third mark point are drawn;
S312, connecting the first mark point and the second mark point to form a first connecting line C1, drawing a second connecting line C2 perpendicular to C1 through the third mark point, drawing a third connecting line C3 parallel to C2 through the first mark point and a fourth connecting line C4 parallel to C2 through the second mark point, obtaining the distance R1 between C3 and the left border of the circumscribed figure, and obtaining the distance R2 between C4 and the right border of the circumscribed figure;
S313, comparing the distance R1 with the distance R2: when R1 is larger than R2, R2 is subtracted from R1 to obtain a first difference, and the first difference is divided by R2 to obtain a first value; when R1 is smaller than R2, R1 is subtracted from R2 to obtain a second difference, and the second difference is divided by R1 to obtain a second value;
S314, comparing the first value or the second value with a preset first threshold: when the first value or the second value is larger than the first threshold, the face orientation is classified as not facing the screen; when it is smaller than the first threshold, the face orientation in the vertical direction is analyzed further.
Specifically, to analyze the face orientation, its change is divided into a horizontal component and a vertical component, and the horizontal direction is analyzed first. A facial feature recognition model is trained in advance with a machine learning model and used to recognize the face and the facial features; a circumscribed figure is established around the face, corresponding marks are made in it based on the facial features, and a first mark point, a second mark point and a third mark point are drawn, where the first mark point is the outer corner of the left eye, the second mark point is the outer corner of the right eye, and the third mark point is the center of the nose tip. The first and second mark points are connected by a first connecting line C1, a second connecting line C2 perpendicular to C1 is drawn through the third mark point, a third connecting line C3 parallel to C2 is drawn through the first mark point and a fourth connecting line C4 parallel to C2 through the second mark point, and the distance R1 between C3 and the left border of the circumscribed figure and the distance R2 between C4 and the right border are obtained. When the face is directly facing the screen, R1 equals R2 because the face is symmetric; when the face, being three-dimensional, turns to one side, the eye corners move with it, so one of the two distances grows while the other shrinks. Therefore, when R1 is larger than R2, R2 is subtracted from R1 and the difference divided by R2 gives the first value, taken as the amplitude of the horizontal change of face orientation; when R1 is smaller than R2, R1 is subtracted from R2 and the difference divided by R1 gives the second value, likewise taken as the amplitude of the horizontal change. The first or second value is compared with the preset first threshold: a value larger than the first threshold means the face orientation has changed too much, and the face orientation is classified as not facing the screen; otherwise the horizontal change is small, and the face orientation in the vertical direction is analyzed next.
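Under this construction, the horizontal check of S311 to S314 reduces to a few arithmetic operations on landmark coordinates. The following sketch assumes the x coordinates of the two outer eye corners and of the circumscribed figure's borders are already available from the facial feature recognition model; the 0.3 threshold is an illustrative value.

```python
# Minimal sketch of the horizontal face-orientation check of S311-S314.
FIRST_THRESHOLD = 0.3  # hypothetical maximum relative asymmetry

def horizontal_face_facing(left_eye_x, right_eye_x, box_left, box_right):
    r1 = max(left_eye_x - box_left, 1e-6)    # R1: line C3 to the left border
    r2 = max(box_right - right_eye_x, 1e-6)  # R2: line C4 to the right border
    if r1 > r2:
        value = (r1 - r2) / r2  # first value: amplitude of the turn one way
    else:
        value = (r2 - r1) / r1  # second value: amplitude of the other way
    # True: horizontal change small enough to go on to the vertical analysis;
    # False: the face orientation is classified as not facing the screen.
    return value <= FIRST_THRESHOLD
```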
Further, the process of analyzing the face orientation in the vertical direction comprises the following steps:
S3141, obtaining the distance R3 between the first connecting line C1 and the upper border of the circumscribed figure; when R3 is not equal to a preset second threshold, the second threshold is subtracted from R3 to obtain a difference, the absolute value of the difference is taken, and the absolute value is divided by the second threshold to obtain a third value;
S3142, when the third value is larger than a preset third threshold, the face orientation is classified as not facing the screen; when the third value is smaller than the third threshold, the face orientation is classified as facing the screen.
Specifically, the distance R3 between the first connecting line C1 and the upper border of the circumscribed figure is obtained, and the value of this distance when the face is directly facing the screen is taken as the second threshold. When R3 is not equal to the preset second threshold, the face orientation has changed in the vertical direction: the second threshold is subtracted from R3, the absolute value of the difference is taken and divided by the second threshold to obtain the third value, which is taken as the amplitude of the vertical change of face orientation. When the third value is larger than the preset third threshold, the face orientation is classified as not facing the screen; when it is smaller than the third threshold, the face orientation is classified as facing the screen. In the same way, the distance R4 between the first connecting line C1 and the lower border of the circumscribed figure can be used: its value when the face directly faces the screen serves as a reference, the relative deviation of the measured R4 from that reference is computed, and a deviation larger than the corresponding preset threshold likewise classifies the face orientation as not facing the screen. When the face orientations in a preset proportion of the student images are classified as not facing the screen, the student's learning state is sent to the teacher terminal to remind the teacher, who can take measures to get the student to listen to the lesson; otherwise the face orientations in all, or more than fifty percent, of the analyzed student images face the screen, in which case the student's attention cannot yet be judged directly, so the image analysis module continues by analyzing the eye gaze direction in each student image.
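A minimal sketch of the vertical check of S3141 to S3142 follows; the reference distance (the second threshold) is supplied by the caller, and the 0.25 third threshold is an assumed value.

```python
# Minimal sketch of the vertical face-orientation check of S3141-S3142.
def vertical_face_facing(r3, second_threshold, third_threshold=0.25):
    # third value: relative deviation of C1's distance to the upper border
    # from its frontal-face reference (the second threshold)
    third_value = abs(r3 - second_threshold) / second_threshold
    return third_value <= third_threshold  # True: face classified as frontal
```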
Further, the process in which the image analysis module analyzes the eye gaze direction comprises the following steps:
S321, recognizing the eye contour with the facial feature recognition model and drawing a corresponding connection figure according to the eye contour; drawing a first identification point, a second identification point and a third identification point in the connection figure according to the pupil position and the positions of the left and right end points of the eye respectively; connecting the first identification point and the second identification point to form a first line segment, connecting the first identification point and the third identification point to form a second line segment, and connecting the second identification point and the third identification point to form a third line segment;
S322, first judging the eye gaze direction in the vertical direction: calculating a first included angle between the first line segment and the third line segment and a second included angle between the second line segment and the third line segment, calculating the average of the two angles and comparing it with a preset first angle; when the average angle is larger than the first angle, the eye gaze direction is classified as deviating from the screen; when the average angle is smaller than the first angle, the eye gaze direction is classified as facing the screen in the vertical direction, and S323 is executed to judge the eye gaze direction in the horizontal direction;
S323, dividing the third line segment into three equal parts, called the first, second and third equal parts from left to right; drawing a perpendicular from the first identification point to the third line segment and obtaining the intersection point of the perpendicular and the third line segment; when the intersection point falls in the first or the third equal part, the eye gaze direction is classified as deviating from the screen, and when it falls in the second equal part, the eye gaze direction is classified as facing the screen.
Specifically, to analyze the eye gaze direction, the facial feature recognition model is used to recognize the eye contour, and a corresponding connection figure is drawn from it, as shown in FIG. 3. A first identification point, a second identification point and a third identification point are drawn in the connection figure according to the pupil position and the positions of the left and right end points of the eye; the first and second identification points are connected to form a first line segment, the first and third to form a second line segment, and the second and third to form a third line segment. The eye gaze direction in the vertical direction is judged first: the first included angle between the first and third line segments and the second included angle between the second and third line segments are calculated, their average is taken and compared with the preset first angle, for example thirty degrees. When the average is larger than the first angle, the eyes are looking up or down at a large angle, so the gaze is classified as deviating from the screen; when the average is smaller than the first angle, the gaze is classified as facing the screen in the vertical direction, and the horizontal direction is judged next: the third line segment is divided into three equal parts, called the first, second and third equal parts from left to right, a perpendicular is drawn from the first identification point to the third line segment, and the intersection point of the perpendicular and the third line segment is obtained. When the intersection point falls in the first or the third equal part, the eyes are looking left or right at a large angle, and the gaze is classified as deviating from the screen; when it falls in the second equal part, the gaze is classified as facing the screen. When analysis shows that the face in a student image faces the screen and the eye gaze direction also faces the screen, directly pronouncing on the student's learning state would still be inaccurate, so to judge the learning state more precisely the facial expression of the student image is analyzed next.
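The gaze test of S321 to S323 can be sketched as follows with assumed two-dimensional landmark tuples; the thirty-degree first angle follows the example in the text, and the helper names are illustrative.

```python
# Minimal sketch of S321-S323: `pupil` is the first identification point,
# `left` and `right` the second and third (the eye's end points).
import math

FIRST_ANGLE_DEG = 30.0  # the "first angle" from the example in the text

def _angle_at(vertex, a, b):
    """Angle in degrees at `vertex` between rays vertex->a and vertex->b."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def gaze_facing_screen(pupil, left, right):
    # vertical check (S322): average of the first and second included angles
    first_angle = _angle_at(left, pupil, right)   # first vs third line segment
    second_angle = _angle_at(right, pupil, left)  # second vs third line segment
    if (first_angle + second_angle) / 2 > FIRST_ANGLE_DEG:
        return False  # looking up or down: gaze deviates from the screen
    # horizontal check (S323): the foot of the perpendicular from the pupil
    # must fall in the middle third of the corner-to-corner (third) segment
    dx, dy = right[0] - left[0], right[1] - left[1]
    t = ((pupil[0] - left[0]) * dx + (pupil[1] - left[1]) * dy) / (dx * dx + dy * dy)
    return 1 / 3 <= t <= 2 / 3
```

A design note on the split: raising or lowering the pupil changes both included angles together, so an angle average captures the vertical component, while a sideways glance shifts the foot of the perpendicular along the corner-to-corner segment, so the middle-third test captures the horizontal component.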
Further, the process in which the image analysis module analyzes facial expressions comprises the following steps:
S331, presetting a mapping table of the facial actions corresponding to different facial expressions and the emotion value corresponding to each expression, the emotion value representing the emotion conveyed by the current expression;
S332, training a facial action detection model in advance, detecting the facial actions in a student image with the model, and looking up the corresponding facial expression and emotion value in the mapping table according to the recognized actions;
S333, each time a student image is detected, storing the corresponding facial expression and emotion value; after a preset number of student images have been detected, adding the emotion values to obtain a total emotion value and comparing it with a preset fourth threshold: a total emotion value larger than the fourth threshold indicates that the student's current emotional state is relatively positive, and a total smaller than the fourth threshold indicates that it is relatively negative.
Specifically, when the face orientation in a preset proportion of the student images is not facing the screen, it can be judged that the student is not concentrating on the course; but when the face orientation in a preset proportion of the images, for example fifty percent or more, does face the screen, face orientation alone cannot accurately determine whether the student's attention is on the class, so the judgment is combined with the facial expressions of the student images to make it more precise. Facial expressions are exhibited through facial actions, including drooping eyebrows, raised inner eyebrows, raised upper eyelids, tightened eyelids, raised cheeks, closed eyelids, squinting, blinking, a wrinkled nose, a raised upper lip, raised lip corners, drooping lip corners, parted lips, a dropped jaw with parted lips, a wide-open mouth, upturned lip corners, pouting, pressed-together lips, and so on. Each facial expression corresponds to several facial actions: for example, a happy expression corresponds to raised cheeks and upturned lip corners, while a confused expression corresponds to knitted brows, raised inner eyebrows and squinting. Different facial expressions also correspond to different emotions: the emotion values of positive emotions such as happiness and confidence are set positive, and those of negative emotions such as sadness, confusion and depression are set negative. A mapping table of facial expressions, facial actions and emotion values is preset, and a facial action detection model is trained in advance; the facial actions in each picture are detected by the model, and the corresponding facial expression and emotion value are looked up in the mapping table according to the recognized actions. Each time a student image is detected, the corresponding facial expression and emotion value are stored; after a preset number of student images have been detected, the emotion values are added to obtain a total emotion value, which is compared with the preset fourth threshold. A total larger than the fourth threshold indicates that the student's emotion over the whole period was relatively positive, the student being rather engaged or concentrated; a total smaller than the fourth threshold indicates that the emotion over the period was relatively negative, possibly confusion or sadness, and the teacher needs to pay particular attention to that student.
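The expression-to-emotion accounting of S331 to S333 is sketched below. The table entries, the emotion values and the zero fourth threshold are illustrative assumptions, not the patent's actual mapping table.

```python
# Minimal sketch of S331-S333: look up expression and emotion value per image,
# accumulate a total emotion value, compare it with the fourth threshold.
EXPRESSION_TABLE = {  # assumed entries: facial actions -> (expression, value)
    frozenset({"cheeks raised", "lip corners up"}):        ("happy", +2),
    frozenset({"inner eyebrows raised", "squinting"}):     ("confused", -1),
    frozenset({"lip corners drooping", "eyelids closed"}): ("dispirited", -2),
}
FOURTH_THRESHOLD = 0  # assumed value

def emotional_state(actions_per_image):
    """actions_per_image: one iterable of detected facial actions per image,
    as a facial action detection model would output (S332)."""
    records, total = [], 0
    for actions in actions_per_image:
        expression, value = EXPRESSION_TABLE.get(frozenset(actions), ("neutral", 0))
        records.append((expression, value))  # S333: store expression and value
        total += value
    # S333: compare the total emotion value with the fourth threshold
    return ("positive" if total > FOURTH_THRESHOLD else "negative"), records
```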
According to another aspect of the embodiments of the present invention, referring to FIG. 2, there is further provided an intelligent teaching system based on the Internet, comprising a remote server module, a teacher terminal module, a student terminal module and an image analysis module, for implementing the intelligent teaching method based on the Internet described above; the specific functions of the modules are as follows:
the remote server module, which sends the class list to each student terminal, reminds the teacher and the students a preset time before each class begins that the course is about to start, receives the teacher-side video stream data and stores it in a first memory, sends it to each student terminal module, and receives the student-side video stream data and stores it in a second memory;
the teacher terminal module, which forms a class list for the teacher and the students, establishes a communication connection with the remote server, sends the class list to the remote server after classes are arranged, collects teacher-side video stream data in the course of teaching, and sends it to the remote server;
the student terminal module, which collects student-side video stream data in the course of a lesson, establishes a communication connection with the remote server, sends the student-side video stream data to the remote server module, intercepts a student image from the student-side video stream data once every preset time, and stores each student's information and the corresponding student images in a third memory;
and the image analysis module, which analyzes the student images, judges the students' learning states according to the analysis results, and sends the learning states to the teacher terminal.
According to another aspect of the embodiment of the present invention, there is also provided a storage medium storing program instructions, where the program instructions, when executed, control a device in which the storage medium is located to perform the method of any one of the above.
In summary, classes are arranged through the teacher terminal to form a class list, which is sent to each student terminal through the remote server. In the course of a lesson, the teacher terminal and the student terminals send their respectively acquired video streams through the remote server for viewing by the other party; the student terminals intercept and store a student image from the student-side video stream data every preset time; the image analysis module acquires and analyzes the student images, judges the students' learning states according to the analysis results, and sends the learning states to the teacher terminal; and two choices, displaying student videos and not displaying them, are provided at the teacher terminal. Because the invention offers the option of not displaying the student-side video stream data at the teacher terminal, the teacher can choose not to display it when network conditions are poor, which reduces the amount of data transmitted and the network load so that the lesson runs more smoothly without stuttering, while analysis of the images of students attending class helps the teacher grasp their learning states in time.
It should be understood that, although the steps in the flowcharts of the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in various embodiments may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed in sequence but may be performed in turn or alternately with at least part of the sub-steps or stages of other steps.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program instructing the relevant hardware; the program may be stored on a non-transitory computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and for brevity, all of the possible combinations of the technical features of the above embodiments are not described, however, they should be considered as the scope of the description of the present specification as long as there is no contradiction between the combinations of the technical features.
The foregoing examples have been presented to illustrate only a few embodiments of the invention and are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (8)

1. An intelligent teaching method based on the Internet, characterized by comprising the following steps:
S1, forming a class list through a teacher terminal, sending the class list to a remote server, and sending the class list to each student terminal by the remote server, wherein the remote server also reminds the teacher and the students that the course is about to start when a preset time is before each class begins;
s2, after a lesson is started, a teacher terminal acquires teacher end video stream data and sends the teacher end video stream data to a remote server, a student terminal acquires student end video stream data and sends the student end video stream data to the remote server, the remote server stores the teacher end video stream data in a first memory and sends the teacher end video stream data to each student terminal, the student terminal acquires the teacher end video stream data and then allows students to watch the teacher end video stream data, and the remote server stores the student end video stream data in a second memory;
s3, the remote server acquires the student end video stream data, intercepts student images from the student end video stream data once every preset time, stores the student images in a third memory, acquires and analyzes the student images in the third memory, judges a student learning state according to an analysis result, and sends the student learning state to a teacher terminal;
And S4, providing two choices of displaying student videos and not displaying student videos at the teacher terminal, and providing the student videos of all students or the student videos of some students to be selected and displayed under the condition of selecting and displaying the student videos, wherein the remote server sends the corresponding student end video stream data to the teacher terminal according to the choice of the teacher terminal.
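By way of illustration and not limitation, the selection logic of step S4 might be sketched in Python as follows; the class name VideoRelay, its methods, and the use of an in-memory dictionary are hypothetical conveniences for the sketch, not part of the claimed method.

from typing import Dict, List, Optional

class VideoRelay:
    """Forwards student-end video chunks only when the teacher opts in."""

    def __init__(self) -> None:
        # student id -> latest video-stream chunk (stands in for the second memory)
        self.student_streams: Dict[str, bytes] = {}

    def receive_student_chunk(self, student_id: str, chunk: bytes) -> None:
        self.student_streams[student_id] = chunk

    def chunks_for_teacher(self, show_videos: bool,
                           selected: Optional[List[str]] = None) -> Dict[str, bytes]:
        """Return the chunks the teacher terminal should receive.

        show_videos=False: nothing is forwarded, reducing network load;
        selected=None with show_videos=True: all students are forwarded;
        otherwise only the listed students are forwarded.
        """
        if not show_videos:
            return {}
        if selected is None:
            return dict(self.student_streams)
        return {sid: c for sid, c in self.student_streams.items()
                if sid in selected}

# Usage sketch:
relay = VideoRelay()
relay.receive_student_chunk("s1", b"frame-bytes-1")
relay.receive_student_chunk("s2", b"frame-bytes-2")
assert relay.chunks_for_teacher(show_videos=False) == {}
assert set(relay.chunks_for_teacher(True, ["s2"])) == {"s2"}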
2. The Internet-based intelligent teaching method according to claim 1, characterized in that the process of acquiring and analyzing the student images in the third memory by the image analysis module comprises the following steps:
S31, sequentially acquiring a preset number of student images from the third memory, and first analyzing the face orientation of each student image to determine whether the face orientation is directly facing the screen; in the case that none of the preset number of student images shows a face orientation directly facing the screen, which indicates that the student's attention is not on learning, sending the analysis result to the teacher terminal; otherwise, continuing to execute S32;
S32, the image analysis module analyzes the eye gaze direction in the student images, and in the case that none of a preset proportion of the student images shows an eye gaze direction directly facing the screen, sends the analysis result to the teacher terminal; otherwise, continuing to execute S33;
S33, the image analysis module analyzes the facial expression in the student images, judges the student learning state according to the facial expression, and sends the student learning state to the teacher terminal.
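A minimal sketch of this staged, early-exit pipeline is given below, assuming the three per-stage checks are supplied as functions; all names are hypothetical, and the "none facing the screen" condition is generalized to a fraction threshold for illustration.

from typing import Callable, List

def analyze_batch(images: List[object],
                  face_facing_screen: Callable[[object], bool],
                  gaze_on_screen: Callable[[object], bool],
                  expression_state: Callable[[List[object]], str],
                  min_fraction: float = 0.0) -> str:
    # S31: if no sampled image shows a face oriented toward the screen,
    # report inattention immediately and skip the later stages.
    facing = sum(1 for img in images if face_facing_screen(img))
    if facing <= min_fraction * len(images):
        return "attention not on learning (face orientation)"
    # S32: the same early exit, based on eye-gaze direction.
    gazing = sum(1 for img in images if gaze_on_screen(img))
    if gazing <= min_fraction * len(images):
        return "attention not on learning (eye gaze)"
    # S33: only now run the costlier facial-expression analysis.
    return expression_state(images)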
3. The Internet-based intelligent teaching method according to claim 2, characterized in that the process of performing face orientation analysis on the student images comprises the following steps:
S311, first analyzing the face orientation in the horizontal direction: pre-training a facial feature recognition model, recognizing the face and the facial features by using the facial feature recognition model, establishing a circumscribed figure based on the face, making corresponding marks in the circumscribed figure based on the facial features, and drawing a first mark point, a second mark point and a third mark point;
S312, connecting the first mark point and the second mark point to form a first connecting line C1, drawing a second connecting line C2 perpendicular to the connecting line C1 through the third mark point, drawing a third connecting line C3 parallel to the second connecting line C2 through the first mark point, drawing a fourth connecting line C4 parallel to the second connecting line C2 through the second mark point, obtaining the distance R1 between the third connecting line C3 and the left edge of the circumscribed figure, and obtaining the distance R2 between the fourth connecting line C4 and the right edge of the circumscribed figure;
S313, comparing the distance R1 with the distance R2: when the distance R1 is larger than the distance R2, subtracting the distance R2 from the distance R1 to obtain a first distance difference, and dividing the first distance difference by the distance R2 to obtain a first value; when the distance R1 is smaller than the distance R2, subtracting the distance R1 from the distance R2 to obtain a second distance difference, and dividing the second distance difference by the distance R1 to obtain a second value;
S314, comparing the first value or the second value with a preset first threshold: when the first value or the second value is larger than the first threshold, classifying the face orientation as not directly facing the screen; when the first value or the second value is smaller than the first threshold, continuing to analyze the face orientation in the vertical direction.
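For illustration, the horizontal test of S312-S314 reduces to arithmetic on x-coordinates, assuming the circumscribed figure is an axis-aligned bounding box; the function name, the guard for degenerate margins, and the threshold value 0.25 are assumptions for the sketch, not values from the claims.

def horizontal_orientation_facing_screen(box_left: float, box_right: float,
                                         x_mark1: float, x_mark2: float,
                                         first_threshold: float = 0.25) -> bool:
    # C3 and C4 are vertical lines through the two lateral mark points, so the
    # distances to the box edges reduce to differences of x-coordinates.
    r1 = min(x_mark1, x_mark2) - box_left    # distance R1 to the left edge
    r2 = box_right - max(x_mark1, x_mark2)   # distance R2 to the right edge
    if min(r1, r2) <= 0:
        return False  # a mark point on or outside the box edge: clearly turned
    if r1 == r2:
        return True   # perfectly symmetric margins: facing the screen
    larger, smaller = max(r1, r2), min(r1, r2)
    ratio = (larger - smaller) / smaller     # the claimed first/second value
    return ratio <= first_threshold

# Symmetric margins pass; a head turned to one side leaves asymmetric
# margins and fails:
assert horizontal_orientation_facing_screen(0, 100, 40, 60) is True
assert horizontal_orientation_facing_screen(0, 100, 10, 30) is False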
4. The Internet-based intelligent teaching method according to claim 3, characterized in that the process of analyzing the face orientation in the vertical direction comprises the following steps:
S3141, obtaining the distance R3 between the first connecting line C1 and the upper edge of the circumscribed figure; when the distance R3 is not equal to a preset second threshold, subtracting the second threshold from the distance R3 to obtain a difference, taking the absolute value of the difference to obtain r3, and dividing r3 by the second threshold to obtain a third value;
S3142, when the third value is larger than a preset third threshold, classifying the face orientation as not directly facing the screen; when the third value is smaller than the third threshold, classifying the face orientation as directly facing the screen.
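A companion sketch for this vertical test follows; the second and third threshold values used here are illustrative placeholders, since the claims leave them unspecified.

def vertical_orientation_facing_screen(r3_distance: float,
                                       second_threshold: float = 30.0,
                                       third_threshold: float = 0.4) -> bool:
    # r3_distance is the distance R3 from line C1 to the top edge of the
    # circumscribed figure; the third value is its relative deviation from
    # the expected (second-threshold) distance.
    third_value = abs(r3_distance - second_threshold) / second_threshold
    return third_value <= third_threshold

assert vertical_orientation_facing_screen(35.0) is True   # slight nod
assert vertical_orientation_facing_screen(80.0) is False  # strong tilt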
5. The Internet-based intelligent teaching method according to claim 2, characterized in that the process of analyzing the eye gaze direction by the image analysis module comprises the following steps:
S321, recognizing the eye contour by using the facial feature recognition model, drawing a corresponding figure from the eye contour, drawing in the figure a first recognition point, a second recognition point and a third recognition point according to the pupil position and the positions of the left and right end points of the eye respectively, connecting the first recognition point and the second recognition point to form a first line segment, connecting the first recognition point and the third recognition point to form a second line segment, and connecting the second recognition point and the third recognition point to form a third line segment;
S322, first judging the eye gaze direction in the vertical direction: calculating a first included angle formed by the first line segment and the third line segment, calculating a second included angle formed by the second line segment and the third line segment, calculating the average angle of the first included angle and the second included angle, and comparing the average angle with a preset first angle; when the average angle is larger than the first angle, classifying the eye gaze direction as deviating from the screen; when the average angle is smaller than the first angle, classifying the eye gaze direction as directly facing the screen in the vertical direction, and continuing to execute S323 to judge the eye gaze direction in the horizontal direction;
S323, dividing the third line segment from left to right into three equal parts, namely a first equal part, a second equal part and a third equal part, and dropping a perpendicular from the first recognition point to the third line segment to obtain the intersection point of the perpendicular and the third line segment; when the intersection point falls on the first equal part or the third equal part, classifying the eye gaze direction as deviating from the screen, and when the intersection point falls on the second equal part, classifying the eye gaze direction as directly facing the screen.
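The geometry of S321-S323 can be worked through with elementary vector arithmetic, as in the following sketch; the pixel coordinates, the 25-degree first angle, and the function names are assumptions for illustration only.

import math

def gaze_facing_screen(pupil, left_corner, right_corner,
                       first_angle_deg: float = 25.0) -> bool:
    def angle_at(corner, a, b):
        # Angle at `corner` between the rays corner->a and corner->b (degrees).
        v1 = (a[0] - corner[0], a[1] - corner[1])
        v2 = (b[0] - corner[0], b[1] - corner[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

    # S322 (vertical): average the two base angles of the triangle formed by
    # the pupil and the two eye corners, and compare with the first angle.
    average = (angle_at(left_corner, pupil, right_corner)
               + angle_at(right_corner, pupil, left_corner)) / 2.0
    if average > first_angle_deg:
        return False
    # S323 (horizontal): project the pupil onto the corner-to-corner segment;
    # the foot of the perpendicular must land in the middle third.
    seg = (right_corner[0] - left_corner[0], right_corner[1] - left_corner[1])
    t = (((pupil[0] - left_corner[0]) * seg[0]
          + (pupil[1] - left_corner[1]) * seg[1])
         / (seg[0] ** 2 + seg[1] ** 2))
    return 1.0 / 3.0 <= t <= 2.0 / 3.0

# A centered pupil just above the corner line faces the screen; a pupil
# shifted far to the left does not:
assert gaze_facing_screen((50, 48), (20, 50), (80, 50)) is True
assert gaze_facing_screen((25, 48), (20, 50), (80, 50)) is False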
6. The Internet-based intelligent teaching method according to claim 2, characterized in that the process of analyzing the facial expression by the image analysis module comprises the following steps:
S331, presetting a mapping table of facial actions to facial expressions and corresponding emotion values, wherein an emotion value characterizes the emotion conveyed by the current expression;
S332, pre-training a facial action detection model, detecting the facial action in the student image through the facial action detection model, and looking up the corresponding facial expression and emotion value in the mapping table according to the detected action;
S333, each time a student image is detected, storing the corresponding facial expression and emotion value; after a preset number of student images have been detected, adding the corresponding emotion values to obtain a total emotion value, and comparing the total emotion value with a preset fourth threshold; when the total emotion value is larger than the fourth threshold, the current emotional state of the student is relatively positive, and when the total emotion value is smaller than the fourth threshold, the current emotional state of the student is relatively negative.
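The scoring of S331-S333 amounts to a table lookup and a running sum, as sketched below; the table entries, labels, and the fourth-threshold value are invented placeholders rather than values from the patent.

# facial action -> (facial expression label, emotion value)
EMOTION_TABLE = {
    "smile":   ("happy",     2),
    "neutral": ("calm",      1),
    "frown":   ("confused", -1),
    "yawn":    ("tired",    -2),
}

def emotional_state(detected_actions, fourth_threshold: int = 0) -> str:
    # S332: per-image lookup of expression and emotion value.
    records = [EMOTION_TABLE[action] for action in detected_actions]
    # S333: sum the emotion values over the preset number of images.
    total = sum(value for _, value in records)
    return "positive" if total > fourth_threshold else "negative"

assert emotional_state(["smile", "neutral", "frown"]) == "positive"  # total  2
assert emotional_state(["yawn", "frown", "neutral"]) == "negative"   # total -2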
7. An Internet-based intelligent teaching system for implementing the method according to any one of claims 1-6, characterized by comprising the following modules:
a remote server module, configured to send the class list to each student terminal, remind teachers and students that the course is about to start a preset time before each class begins, receive the teacher-end video stream data and store it in the first memory, send it to each student terminal module, and receive the student-end video stream data and store it in the second memory;
a teacher terminal module, configured to form the class list of teachers and students, establish a communication connection with the remote server, send the class list to the remote server after class scheduling, collect the teacher-end video stream data during teaching, and send the teacher-end video stream data to the remote server;
a student terminal module, configured to collect the student-end video stream data during a lesson, establish a communication connection with the remote server, send the student-end video stream data to the remote server module, intercept a student image from the student-end video stream data once every preset time, and store each item of student information and each corresponding student image in the third memory;
and an image analysis module, configured to analyze the student images, judge the student learning state according to the analysis result, and send the student learning state to the teacher terminal.
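Purely as a structural illustration, the division of responsibilities among the four claimed modules could be skeletonized as follows; all class and method names are hypothetical, and networking and video capture are stubbed out.

class RemoteServerModule:
    def distribute_class_list(self, class_list, student_terminals): ...
    def store_teacher_stream(self, chunk): ...   # first memory
    def store_student_stream(self, chunk): ...   # second memory
    def remind_before_class(self, preset_minutes): ...

class TeacherTerminalModule:
    def build_class_list(self, schedule): ...
    def send_teacher_stream(self, server): ...
    def choose_display_mode(self, show_videos, selected_students=None): ...

class StudentTerminalModule:
    def send_student_stream(self, server): ...
    def capture_student_image(self): ...         # every preset interval -> third memory

class ImageAnalysisModule:
    def analyze(self, student_images) -> str: ...  # returns the learning state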
8. A storage medium storing program instructions, wherein the program instructions, when executed, control a device in which the storage medium is located to perform the method of any one of claims 1-6.
CN202311096931.1A 2023-08-29 2023-08-29 Intelligent teaching method, system and storage medium based on Internet Withdrawn CN116994465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311096931.1A CN116994465A (en) 2023-08-29 2023-08-29 Intelligent teaching method, system and storage medium based on Internet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311096931.1A CN116994465A (en) 2023-08-29 2023-08-29 Intelligent teaching method, system and storage medium based on Internet

Publications (1)

Publication Number Publication Date
CN116994465A true CN116994465A (en) 2023-11-03

Family

ID=88532108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311096931.1A Withdrawn CN116994465A (en) 2023-08-29 2023-08-29 Intelligent teaching method, system and storage medium based on Internet

Country Status (1)

Country Link
CN (1) CN116994465A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117708373A (en) * 2024-02-06 2024-03-15 成都数之联科技股份有限公司 Animation interaction method and system
CN117708373B (en) * 2024-02-06 2024-04-05 成都数之联科技股份有限公司 Animation interaction method and system

Similar Documents

Publication Publication Date Title
CN111796752B (en) Interactive teaching system based on PC
CN110287790B (en) Learning state hybrid analysis method oriented to static multi-user scene
CN109344682A (en) Classroom monitoring method, device, computer equipment and storage medium
WO2021077382A1 (en) Method and apparatus for determining learning state, and intelligent robot
KR20100016696A (en) Student learning attitude analysis systems in virtual lecture
CN110580470A (en) Monitoring method and device based on face recognition, storage medium and computer equipment
CN116994465A (en) Intelligent teaching method, system and storage medium based on Internet
Wang et al. Automated student engagement monitoring and evaluation during learning in the wild
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
Hung et al. Augmenting teacher-student interaction in digital learning through affective computing
CN111178263B (en) Real-time expression analysis method and device
CN111967350A (en) Remote classroom concentration analysis method and device, computer equipment and storage medium
Shobana et al. I-Quiz: An Intelligent Assessment Tool for Non-Verbal Behaviour Detection.
CN116110091A (en) Online learning state monitoring system
JP7099377B2 (en) Information processing equipment and information processing method
CN109754653A (en) A kind of method and system of individualized teaching
CN113239794B (en) Online learning-oriented learning state automatic identification method
CN114581271A (en) Intelligent processing method and system for online teaching video
KR102245319B1 (en) System for analysis a concentration of learner
Hukkeri et al. Estimation of Engagement of Learners in MOOCs using Smart Visual Processing
Shah et al. Assessment of student attentiveness to e-learning by monitoring behavioural elements
Jain et al. Study for emotion recognition of different age groups students during online class
CN113536893A (en) Online teaching learning concentration degree identification method, device, system and medium
Qianqian et al. Research on behavior analysis of real-time online teaching for college students based on head gesture recognition
Ray et al. Design and implementation of affective e-learning strategy based on facial emotion recognition

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20231103