CN112990030A - Student emotion analysis system based on expression image - Google Patents

Student emotion analysis system based on expression image

Info

Publication number
CN112990030A
Authority
CN
China
Prior art keywords
student
trunk
students
face
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110308529.XA
Other languages
Chinese (zh)
Other versions
CN112990030B (en)
Inventor
Fan Yifei (范逸非)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Lingjun Technology Co., Ltd.
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110308529.XA
Publication of CN112990030A
Application granted
Publication of CN112990030B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Strategic Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Tourism & Hospitality (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a student emotion analysis system based on expression images, comprising a server, a login verification module, an information input module, an image acquisition module, a face recognition module, an expression analysis module, a behavior analysis module, a query module and an information push module. A face photo and a trunk photo of each student are obtained through the image acquisition module and subjected to gridding processing; facial key point labeling and trunk key point labeling are then carried out on the gridded photos respectively, forming the student's face reference model and behavior benchmark reference model. The student's dynamic face photos and dynamic trunk photos are likewise gridded and labeled with facial key points and trunk key points respectively, and the changes of all facial key points and trunk key points are acquired, thereby judging whether the student's emotion is stable.

Description

Student emotion analysis system based on expression image
Technical Field
The invention relates to the technical field of education management, in particular to a student emotion analysis system based on expression images.
Background
With the development of society, people's material life has become increasingly rich, but the satisfaction of material needs has not brought a corresponding rise in happiness; on the contrary, the pressures of social competition have introduced more and more negative emotions into daily life.
For students, emotional changes during the learning process have a great influence on learning and growth, yet these changes are usually overlooked; a student emotion analysis system based on expression images is therefore provided.
Disclosure of Invention
The invention aims to provide a student emotion analysis system based on expression images, so as to solve the problems raised in the background art.
The purpose of the invention is realized by the following technical scheme: a student emotion analysis system based on expression images comprises a server, a login verification module, an information input module, an image acquisition module, a face recognition module, an expression analysis module, a behavior analysis module, a query module and an information push module;
the expression analysis module is used for analyzing dynamic face photos of students so as to obtain emotion changes of the students, and the specific analysis process comprises the following steps:
step N1: carrying out gridding processing on the dynamic face photos of all students, carrying out facial key point labeling on each student's face, and taking each student's nose as the first standard point;
step N2: comparing the facial key points in the dynamic face photos of the students with the facial key points in the face reference model;
step N3: extracting the facial key points in each dynamic face photo, taking the first standard point as reference, obtaining the distance between each facial key point and the first standard point, obtaining the change of each such distance within the time TL, and generating a corresponding dynamic coordinate curve graph of the change, wherein TL > 0;
step N4: marking the dynamic area of the dynamic coordinate curve graph within the time TL as Sm, and substituting Sm into the formula Sc = (Sm - S0) × α to obtain the facial key point fluctuation value Sc; when Sc > 0, it is judged that the student shows emotional fluctuation, and the larger Sc is, the greater the student's emotional fluctuation; S0 is the facial key point reference area preset by the system, α is a proportionality coefficient, and α > 0;
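To make steps N3 and N4 concrete, the following is a minimal sketch of the distance-curve and fluctuation computation. The helper names (keypoint_distances, fluctuation_value) and the trapezoidal reading of the "dynamic area" Sm are illustrative assumptions, not the implementation prescribed by the invention; key points are assumed to be given as (x, y) coordinates per frame.

```python
import numpy as np

def keypoint_distances(frames, standard_idx):
    """Distance of every facial key point to the first standard point (the
    nose), one row per frame; frames has shape (n_frames, n_keypoints, 2)."""
    frames = np.asarray(frames, dtype=float)
    standard = frames[:, standard_idx:standard_idx + 1, :]
    return np.linalg.norm(frames - standard, axis=2)  # (n_frames, n_keypoints)

def fluctuation_value(frames, standard_idx, T, S0, alpha):
    """Steps N3-N4: track each distance over the capture time T, take the
    area swept by the curves as Sm, and return Sc = (Sm - S0) * alpha."""
    d = keypoint_distances(frames, standard_idx)
    t = np.linspace(0.0, T, num=d.shape[0])  # frame timestamps within T
    # "Dynamic area": area between each distance curve and its initial value,
    # summed over all key points (one plausible reading of Sm).
    Sm = sum(np.trapz(np.abs(d[:, k] - d[0, k]), t) for k in range(d.shape[1]))
    return (Sm - S0) * alpha

# Sc > 0 is read as emotional fluctuation; the larger Sc, the larger the
# fluctuation. S0 and alpha are the system-preset constants from step N4.
```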
the behavior analysis module is used for analyzing the dynamic behavior of the student, and judging the emotion change of the student through the change of the behavior of the student, and the specific process comprises the following steps:
step M1: carrying out gridding processing on the dynamic trunk photos of all students, carrying out trunk key point labeling on the back trunk photos and the front trunk photos of the students, and taking each student's chest as the second standard point;
step M2: comparing the trunk key points in the dynamic trunk photos of the students with the trunk key points in the behavior benchmark reference model;
step M3: extracting the trunk key points in each dynamic trunk photo, obtaining the distance between each trunk key point and the second standard point, obtaining the change of each such distance within the time TQ, and generating a corresponding dynamic coordinate curve graph of the change, wherein TQ > 0;
step M4: marking the dynamic area of the dynamic coordinate curve graph within the time TQ as Sq, and substituting Sq into the formula Sd = (Sq - S1) × β to obtain the trunk key point fluctuation value Sd; when Sd > 0, it is judged that the student shows emotional fluctuation, and the higher Sd is, the more severe the student's emotional fluctuation; S1 is the trunk key point reference area preset by the system, β is a proportionality coefficient, and β > 0.
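The torso computation in steps M3 and M4 has the same structure, so the sketch above can be reused for it; in this hedged usage example the chest index, the torso frames and the constant values are all placeholders:

```python
CHEST = 0                       # assumed index of the chest key point
TQ, S1, beta = 3.0, 10.0, 0.5   # illustrative values for the torso constants

# torso_frames: array of shape (n_frames, n_keypoints, 2) captured over TQ
Sd = fluctuation_value(torso_frames, standard_idx=CHEST, T=TQ, S0=S1, alpha=beta)
if Sd > 0:
    print("emotional fluctuation detected; the higher Sd, the more severe")
```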
Further, the information input module is used for inputting the personal information of students and teachers; a student's personal information comprises the student's name, age, class, the head teacher and subject teacher information of the student's class, and the student's seat position in the classroom, and the student's personal information is uploaded to the server for storage; a teacher's personal information comprises the teacher's name, subject, class taught and real-name-authenticated mobile phone number; the real-name-authenticated mobile phone number is bound to the system, and the teacher's personal information is uploaded to the server for storage.
Furthermore, the login verification module is used for a teacher to log in to the system; the teacher verifies and logs in directly with the real-name-authenticated mobile phone number bound to the system; after the mobile phone number is input into the login verification module, a corresponding login verification code is sent to the teacher's mobile phone through the information push module, and the teacher logs in by inputting the verification code into the system.
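A minimal sketch of this login flow, assuming the information push module exposes an SMS-style send function; send_sms, the registered-teacher set and the in-memory code store are illustrative placeholders rather than the system's actual interfaces:

```python
import secrets

pending_codes = {}  # phone number -> outstanding one-time verification code

def request_login(phone, registered_teachers, send_sms):
    """The teacher enters the real-name-authenticated phone number bound to
    the system; the push module sends a verification code to that phone."""
    if phone not in registered_teachers:
        raise PermissionError("phone number is not bound to a teacher account")
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
    pending_codes[phone] = code
    send_sms(phone, f"Your login verification code is {code}")

def verify_login(phone, entered_code):
    """Entering the received code into the system completes the login."""
    return pending_codes.pop(phone, None) == entered_code
```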
Further, the image acquisition module comprises an image pre-acquisition unit and an image dynamic acquisition unit, the image pre-acquisition unit is used for acquiring standard images of students, and the specific process of the standard image acquisition is as follows:
step S1: collecting a student face picture and a student trunk picture, and respectively establishing a face recognition model and a behavior recognition model for the student face picture and the student trunk picture;
the establishment process of the face recognition model specifically comprises the following steps;
step SS 1: acquiring a face photo of the front of a student;
step SS 2: carrying out gridding processing on the face photos of the students;
step SS 3: carrying out face key point labeling on the face photo subjected to the gridding processing, then overlapping the face photo subjected to the gridding processing with the original face photo to obtain a student face model photo, using the student face model photo as a student face reference model, and uploading the face model photo to a server for storage;
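As a rough illustration of steps SS1-SS3, the sketch below grids a frontal face photo and overlays the labeled key points on the original; the grid step, the detect_landmarks helper and the drawing-based "overlap" are assumptions, since the invention does not fix a concrete representation:

```python
import cv2

GRID_STEP = 32  # grid cell size in pixels (illustrative choice)

def build_face_reference_model(photo, detect_landmarks):
    """Steps SS1-SS3: grid the frontal face photo, label facial key points,
    and overlap grid and key points with the original to form the model photo."""
    model = photo.copy()
    h, w = model.shape[:2]
    for x in range(0, w, GRID_STEP):  # gridding: vertical lines
        cv2.line(model, (x, 0), (x, h - 1), (0, 255, 0), 1)
    for y in range(0, h, GRID_STEP):  # gridding: horizontal lines
        cv2.line(model, (0, y), (w - 1, y), (0, 255, 0), 1)
    keypoints = detect_landmarks(photo)  # assumed to return [(x, y), ...]
    for (x, y) in keypoints:  # facial key point labeling
        cv2.circle(model, (int(x), int(y)), 2, (0, 0, 255), -1)
    return model, keypoints  # uploaded to the server for storage
```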
the establishing process of the behavior recognition model specifically comprises the following steps:
step SS 4: acquiring a front trunk photo and a back trunk photo of a student;
step SS 5: carrying out gridding processing on the obtained front trunk picture and the back trunk picture;
step SS 6: respectively carrying out trunk key point labeling on the gridded student front trunk photo and the student back trunk photo, after the trunk key points are labeled on the gridded student front trunk photo and the student back trunk photo, overlapping the gridded student front trunk photo and the original student front trunk photo to obtain a student front trunk model photo; overlapping the student back trunk photo after the gridding treatment with the original student back trunk photo to obtain a student back trunk model photo, taking the student front trunk model photo and the student back trunk model photo as behavior benchmark reference models of students, and uploading the student front trunk model photo and the student back trunk model photo to a server for storage;
step S2: binding the student face model photo, the student front trunk model photo and the student back trunk model photo obtained in steps SS1-SS6 with the student's personal information to form an independent student information data packet;
the image dynamic acquisition unit is used for acquiring dynamic images of students, and the specific acquisition process of the dynamic images is as follows:
step S3: acquiring dynamic face photos of students, recording the number of the acquired dynamic face photos of the students, and marking the number of the dynamic face photos of the students as NL;
step S4: recording the time TL spent on acquiring NL dynamic face photos of students;
step S5: uploading the data obtained in the steps S3-S4 to an expression analysis module;
step S6: acquiring dynamic trunk photos of students, recording the number of the acquired dynamic trunk photos of the students, and marking the number of the dynamic trunk photos of the students as NQ;
step S7: recording the time TQ spent on acquiring the dynamic trunk photos of the NQ students;
step S8: uploading the data obtained in the steps S6-S7 to a behavior analysis module.
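The bookkeeping in steps S3-S8 amounts to counting the captured photos and timing the capture; below is a small sketch under the assumption of a cv2.VideoCapture-style frame source (the camera object and the duration argument are illustrative):

```python
import time

def acquire_dynamic_photos(camera, duration_s):
    """Capture dynamic photos for duration_s seconds, recording the number of
    photos (NL for faces, NQ for trunks) and the elapsed time (TL or TQ)."""
    photos = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        ok, frame = camera.read()  # e.g. cv2.VideoCapture.read()
        if ok:
            photos.append(frame)
    elapsed = time.monotonic() - start
    return photos, len(photos), elapsed  # uploaded to the analysis modules
```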
Furthermore, the face recognition module is used for recognizing the dynamic face photo of the student acquired by the image dynamic acquisition unit, and matching the dynamic face photo of the student with the personal information of the student by recognizing the dynamic face photo of the student.
Furthermore, the query module is used for a teacher to query the conditions of students, and after the teacher logs in the system through the login verification module, the teacher selects a corresponding time period and student information in the query module, so as to query the emotion change conditions of the students in the selected time period.
Further, the working method of the student emotion analysis system based on the expression images specifically comprises the following steps:
the first step is as follows: the personal information of the students and the personal information of the teachers are input through the information input module, and the personal information of the students and the personal information of the teachers are uploaded into the server to be stored; the teacher directly verifies and logs in through the real-name authenticated mobile phone number bound with the system, after the mobile phone number is input into the login verification module, the corresponding login verification code is sent to the teacher's mobile phone through the information pushing module, and the login verification code is input into the system for logging in;
the second step is that: acquiring face photos and trunk photos of students through an image acquisition module, performing standard image acquisition on the students through an image pre-acquisition unit, acquiring face photos of the students, performing meshing processing on the face photos of the students, and labeling key points of the faces in the face photos after the meshing processing, so as to obtain a face reference model of the students; simultaneously acquiring trunk photos of students, performing gridding processing on the trunk photos of the students, and labeling trunk key points in the trunk photos after the gridding processing, so as to obtain behavior benchmark reference models of the students; acquiring a dynamic face photo of a student and a dynamic trunk photo of the student through an image dynamic acquisition unit, and then respectively uploading the acquired dynamic face photo of the student and the acquired dynamic trunk photo of the student to an expression analysis module and a behavior analysis module;
analyzing the dynamic facial photos of the students through an expression analysis module so as to obtain the distance change between the facial key points and the first standard points of the students, wherein the larger the distance change between the facial key points and the first standard points is, the more unstable the emotion of the students is represented, and the smaller the distance change between the facial key points and the first standard points is, the more stable the emotion of the students is; and simultaneously, analyzing the dynamic trunk photos of the students through the behavior analysis module so as to obtain the distance change between the trunk key point and the second standard point of the students, wherein the larger the distance change between the trunk key point and the second standard point is, the more unstable the emotion of the students is, the smaller the distance change between the trunk key point and the second standard point is, and the more stable the emotion of the students is.
Compared with the prior art, the invention has the beneficial effects that: the student emotion analysis system based on expression images is provided with an expression analysis module and a behavior analysis module. Standard image acquisition is carried out on the students through the image acquisition module: a face photo and a trunk photo of each student are obtained and subjected to gridding processing, and facial key point labeling and trunk key point labeling are carried out on the gridded photos respectively, forming the student's face reference model and behavior benchmark reference model. The student's dynamic face photos and dynamic trunk photos are then obtained, gridded and labeled with facial key points and trunk key points respectively, and the distance changes between all facial key points and the first standard point, and between all trunk key points and the second standard point, are acquired, thereby judging whether the student's emotion is stable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic block diagram of a student emotion analysis system based on expression images.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a student emotion analysis system based on expression images includes a server, a login verification module, an information input module, an image acquisition module, a face recognition module, an expression analysis module, a behavior analysis module, a query module, and an information push module;
the information input module is used for inputting personal information of students and personal information of teachers, wherein the personal information of the students comprises names of the students, ages of the students, classes of the students, class master information of the classes of the students and seat positions of the students in classrooms, and the personal information of the students is uploaded to the server to be stored; the teacher's personal information comprises the name of the teacher, the subject of the teacher, the class of the teacher and the mobile phone number of real-name authentication, the mobile phone number of real-name authentication is bound with the system, and meanwhile, the teacher's personal information is uploaded to the server to be stored.
The login verification module is used for a teacher to log in to the system; the teacher verifies and logs in directly with the real-name-authenticated mobile phone number bound to the system; after the mobile phone number is input into the login verification module, a corresponding login verification code is sent to the teacher's mobile phone through the information push module, and the teacher logs in by inputting the verification code into the system.
The image acquisition module comprises an image pre-acquisition unit and an image dynamic acquisition unit, wherein the image pre-acquisition unit is used for acquiring standard images of students, and the specific process of the standard image acquisition is as follows:
step S1: collecting a student face picture and a student trunk picture, and respectively establishing a face recognition model and a behavior recognition model for the student face picture and the student trunk picture;
the establishment process of the face recognition model specifically comprises the following steps;
step SS 1: acquiring a face photo of the front of a student;
step SS 2: carrying out gridding processing on the face photos of the students;
step SS 3: carrying out face key point labeling on the face photo subjected to the gridding processing, then overlapping the face photo subjected to the gridding processing with the original face photo to obtain a student face model photo, using the student face model photo as a student face reference model, and uploading the face model photo to a server for storage;
the establishing process of the behavior recognition model specifically comprises the following steps:
step SS 4: acquiring a front trunk photo and a back trunk photo of a student;
step SS 5: carrying out gridding processing on the obtained front trunk picture and the back trunk picture;
step SS 6: respectively carrying out trunk key point labeling on the gridded student front trunk photo and the student back trunk photo, after the trunk key points are labeled on the gridded student front trunk photo and the student back trunk photo, overlapping the gridded student front trunk photo and the original student front trunk photo to obtain a student front trunk model photo; overlapping the student back trunk photo after the gridding treatment with the original student back trunk photo to obtain a student back trunk model photo, taking the student front trunk model photo and the student back trunk model photo as behavior benchmark reference models of students, and uploading the student front trunk model photo and the student back trunk model photo to a server for storage;
step S2: binding the student face model photo, the student front trunk model photo and the student back trunk model photo obtained in steps SS1-SS6 with the student's personal information to form an independent student information data packet;
the image dynamic acquisition unit is used for acquiring dynamic images of students, and the specific acquisition process of the dynamic images is as follows:
step S3: acquiring dynamic face photos of students, recording the number of the acquired dynamic face photos of the students, and marking the number of the dynamic face photos of the students as NL;
step S4: recording the time TL spent on acquiring NL dynamic face photos of students;
step S5: uploading the data obtained in the steps S3-S4 to an expression analysis module;
step S6: acquiring dynamic trunk photos of students, recording the number of the acquired dynamic trunk photos of the students, and marking the number of the dynamic trunk photos of the students as NQ;
step S7: recording the time TQ spent on acquiring the dynamic trunk photos of the NQ students;
step S8: uploading the data obtained in the steps S6-S7 to a behavior analysis module.
The face recognition module is used for recognizing the dynamic face photo of the student acquired by the image dynamic acquisition unit and matching the dynamic face photo of the student with personal information of the student by recognizing the dynamic face photo of the student;
the expression analysis module is used for analyzing dynamic face photos of students so as to obtain emotion changes of the students, and the specific analysis process comprises the following steps:
step N1: carrying out gridding processing on the dynamic face photos of all students, carrying out facial key point labeling on each student's face, and taking each student's nose as the first standard point;
step N2: comparing the facial key points in the dynamic face photos of the students with the facial key points in the face reference model;
step N3: extracting the facial key points in each dynamic face photo, taking the first standard point as reference, obtaining the distance between each facial key point and the first standard point, obtaining the change of each such distance within the time TL, and generating a corresponding dynamic coordinate curve graph of the change, wherein TL > 0;
step N4: marking the dynamic area of the dynamic coordinate curve graph within the time TL as Sm, and substituting Sm into the formula Sc = (Sm - S0) × α to obtain the facial key point fluctuation value Sc; when Sc > 0, it is judged that the student shows emotional fluctuation, and the larger Sc is, the greater the student's emotional fluctuation; S0 is the facial key point reference area preset by the system, α is a proportionality coefficient, and α > 0;
the behavior analysis module is used for analyzing the dynamic behavior of the student, and judging the emotion change of the student through the change of the behavior of the student, and the specific process comprises the following steps:
step M1: carrying out gridding processing on the dynamic trunk photos of all students, carrying out trunk key point labeling on the back trunk photos and the front trunk photos of the students, and taking each student's chest as the second standard point;
step M2: comparing the trunk key points in the dynamic trunk photos of the students with the trunk key points in the behavior benchmark reference model;
step M3: extracting the trunk key points in each dynamic trunk photo, obtaining the distance between each trunk key point and the second standard point, obtaining the change of each such distance within the time TQ, and generating a corresponding dynamic coordinate curve graph of the change, wherein TQ > 0;
step M4: marking the dynamic area of the dynamic coordinate curve graph within the time TQ as Sq, and substituting Sq into the formula Sd = (Sq - S1) × β to obtain the trunk key point fluctuation value Sd; when Sd > 0, it is judged that the student shows emotional fluctuation, and the higher Sd is, the more severe the student's emotional fluctuation; S1 is the trunk key point reference area preset by the system, β is a proportionality coefficient, and β > 0.
The query module is used for a teacher to query the conditions of students: after the teacher logs in to the system through the login verification module, the teacher selects a corresponding time period and student information in the query module, so as to query the emotion change of the students within the selected time period.
The working method of the student emotion analysis system based on the expression images specifically comprises the following steps:
the first step is as follows: the personal information of the students and the personal information of the teachers are input through the information input module, and the personal information of the students and the personal information of the teachers are uploaded into the server to be stored; the teacher directly verifies and logs in through the real-name authenticated mobile phone number bound with the system, after the mobile phone number is input into the login verification module, the corresponding login verification code is sent to the teacher's mobile phone through the information pushing module, and the login verification code is input into the system for logging in;
the second step is that: acquiring face photos and trunk photos of students through an image acquisition module, performing standard image acquisition on the students through an image pre-acquisition unit, acquiring face photos of the students, performing meshing processing on the face photos of the students, and labeling key points of the faces in the face photos after the meshing processing, so as to obtain a face reference model of the students; simultaneously acquiring trunk photos of students, performing gridding processing on the trunk photos of the students, and labeling trunk key points in the trunk photos after the gridding processing, so as to obtain behavior benchmark reference models of the students; acquiring a dynamic face photo of a student and a dynamic trunk photo of the student through an image dynamic acquisition unit, and then respectively uploading the acquired dynamic face photo of the student and the acquired dynamic trunk photo of the student to an expression analysis module and a behavior analysis module;
the third step: analyzing the dynamic facial photos of the students through an expression analysis module so as to obtain the distance change between the facial key points and the first standard points of the students, wherein the larger the distance change between the facial key points and the first standard points is, the more unstable the emotion of the students is represented, and the smaller the distance change between the facial key points and the first standard points is, the more stable the emotion of the students is; and simultaneously, analyzing the dynamic trunk photos of the students through the behavior analysis module so as to obtain the distance change between the trunk key point and the second standard point of the students, wherein the larger the distance change between the trunk key point and the second standard point is, the more unstable the emotion of the students is, the smaller the distance change between the trunk key point and the second standard point is, and the more stable the emotion of the students is.
The above formulas all operate on dimensionless numerical values. Each formula was obtained by collecting a large amount of data and performing software simulation so as to approximate the real situation as closely as possible; the preset parameters and preset thresholds in the formulas are set by those skilled in the art according to the actual situation, or obtained through simulation on a large amount of data.
The working principle is as follows: the personal information of the students and the teachers is input through the information input module and uploaded to the server for storage. A teacher verifies and logs in directly with the real-name-authenticated mobile phone number bound to the system: after the mobile phone number is input into the login verification module, a corresponding login verification code is sent to the teacher's mobile phone through the information push module, and the teacher logs in by inputting the verification code into the system. Face photos and trunk photos of the students are acquired through the image acquisition module: the image pre-acquisition unit performs standard image acquisition, gridding each student's face photo and labeling facial key points in the gridded photo to obtain the student's face reference model, and likewise gridding each student's trunk photo and labeling trunk key points to obtain the student's behavior benchmark reference model. The image dynamic acquisition unit then acquires dynamic face photos and dynamic trunk photos of the students and uploads them to the expression analysis module and the behavior analysis module respectively. The expression analysis module analyzes the dynamic face photos to obtain the distance changes between the facial key points and the first standard point: the larger these changes, the more unstable the student's emotion, and the smaller they are, the more stable the student's emotion. At the same time, the behavior analysis module analyzes the dynamic trunk photos to obtain the distance changes between the trunk key points and the second standard point: the larger these changes, the more unstable the student's emotion, and the smaller they are, the more stable the student's emotion. After logging in and entering a time period and student information, a teacher can query the student's emotion changes over that time period.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (6)

1. A student emotion analysis system based on expression images is characterized by comprising a server, a login verification module, an information input module, an image acquisition module, a face recognition module, an expression analysis module, a behavior analysis module, a query module and an information push module;
the expression analysis module is used for analyzing dynamic face photos of students so as to obtain emotion changes of the students, and the specific analysis process comprises the following steps:
step N1: carrying out gridding processing on the dynamic face photos of all students, carrying out facial key point labeling on each student's face, and taking each student's nose as the first standard point;
step N2: comparing the facial key points in the dynamic face photos of the students with the facial key points in the face reference model;
step N3: extracting the facial key points in each dynamic face photo, taking the first standard point as reference, obtaining the distance between each facial key point and the first standard point, obtaining the change of each such distance within the time TL, and generating a corresponding dynamic coordinate curve graph of the change, wherein TL > 0;
step N4: marking the dynamic area of the dynamic coordinate curve graph within the time TL as Sm, and substituting Sm into the formula Sc = (Sm - S0) × α to obtain the facial key point fluctuation value Sc; when Sc > 0, it is judged that the student shows emotional fluctuation, and the larger Sc is, the greater the student's emotional fluctuation; S0 is the facial key point reference area preset by the system, α is a proportionality coefficient, and α > 0;
the behavior analysis module is used for analyzing the dynamic behavior of the student, and judging the emotion change of the student through the change of the behavior of the student, and the specific process comprises the following steps:
step M1: carrying out gridding processing on the dynamic trunk photos of all students, carrying out trunk key point labeling on the back trunk photos and the front trunk photos of the students, and taking each student's chest as the second standard point;
step M2: comparing the trunk key points in the dynamic trunk photos of the students with the trunk key points in the behavior benchmark reference model;
step M3: extracting the trunk key points in each dynamic trunk photo, obtaining the distance between each trunk key point and the second standard point, obtaining the change of each such distance within the time TQ, and generating a corresponding dynamic coordinate curve graph of the change, wherein TQ > 0;
step M4: marking the dynamic area of the dynamic coordinate curve graph within the time TQ as Sq, and substituting Sq into the formula Sd = (Sq - S1) × β to obtain the trunk key point fluctuation value Sd; when Sd > 0, it is judged that the student shows emotional fluctuation, and the higher Sd is, the more severe the student's emotional fluctuation; S1 is the trunk key point reference area preset by the system, β is a proportionality coefficient, and β > 0.
2. The student emotion analysis system based on expression images as claimed in claim 1, wherein the information input module is used for inputting the personal information of students and teachers; a student's personal information comprises the student's name, age, class, the head teacher information of the student's class, and the student's seat position in the classroom, and the student's personal information is uploaded to the server for storage; a teacher's personal information comprises the teacher's name, subject, class taught and real-name-authenticated mobile phone number; the real-name-authenticated mobile phone number is bound to the system, and the teacher's personal information is uploaded to the server for storage.
3. The student emotion analysis system based on expression images as claimed in claim 1, wherein the login verification module is used for a teacher to log in to the system; the teacher verifies and logs in directly with the real-name-authenticated mobile phone number bound to the system; after the mobile phone number is input into the login verification module, a corresponding login verification code is sent to the teacher's mobile phone through the information push module, and the teacher logs in by inputting the verification code into the system.
4. The student emotion analysis system based on expression images as claimed in claim 1, wherein the image acquisition module comprises an image pre-acquisition unit and an image dynamic acquisition unit, the image pre-acquisition unit is used for standard image acquisition for students, and the specific process of standard image acquisition is as follows:
step S1: collecting a student face picture and a student trunk picture, and respectively establishing a face recognition model and a behavior recognition model for the student face picture and the student trunk picture;
the establishment process of the face recognition model specifically comprises the following steps:
step SS 1: acquiring a face photo of the front of a student;
step SS 2: carrying out gridding processing on the face photos of the students;
step SS 3: carrying out face key point labeling on the face photo subjected to the gridding processing, then overlapping the face photo subjected to the gridding processing with the original face photo to obtain a student face model photo, using the student face model photo as a student face reference model, and uploading the face model photo to a server for storage;
the establishing process of the behavior recognition model specifically comprises the following steps:
step SS 4: acquiring a front trunk photo and a back trunk photo of a student;
step SS 5: carrying out gridding processing on the obtained front trunk picture and the back trunk picture;
step SS 6: respectively carrying out trunk key point labeling on the gridded student front trunk photo and the student back trunk photo, after the trunk key points are labeled on the gridded student front trunk photo and the student back trunk photo, overlapping the gridded student front trunk photo and the original student front trunk photo to obtain a student front trunk model photo; overlapping the student back trunk photo after the gridding treatment with the original student back trunk photo to obtain a student back trunk model photo, taking the student front trunk model photo and the student back trunk model photo as behavior benchmark reference models of students, and uploading the student front trunk model photo and the student back trunk model photo to a server for storage;
step S2: binding the student face model photo, the student front trunk model photo and the student back trunk model photo obtained in steps SS1-SS6 with the student's personal information to form an independent student information data packet;
the image dynamic acquisition unit is used for acquiring dynamic images of students, and the specific acquisition process of the dynamic images is as follows:
step S3: acquiring dynamic face photos of students, recording the number of the acquired dynamic face photos of the students, and marking the number of the dynamic face photos of the students as NL;
step S4: recording the time TL spent on acquiring NL dynamic face photos of students;
step S5: uploading the data obtained in the steps S3-S4 to an expression analysis module;
step S6: acquiring dynamic trunk photos of students, recording the number of the acquired dynamic trunk photos of the students, and marking the number of the dynamic trunk photos of the students as NQ;
step S7: recording the time TQ spent on acquiring the dynamic trunk photos of the NQ students;
step S8: uploading the data obtained in the steps S6-S7 to a behavior analysis module.
5. The student emotion analysis system based on expression images as claimed in claim 1, wherein the face recognition module is used for recognizing the dynamic face photos of students acquired by the image dynamic acquisition unit, and for matching each dynamic face photo with the corresponding student's personal information through this recognition.
6. The student emotion analysis system based on expression images as claimed in claim 1, wherein the query module is used for a teacher to query the conditions of students: after the teacher logs in to the system through the login verification module, the teacher selects a corresponding time period and student information in the query module, so as to query the emotion change of the students within the selected time period.
CN202110308529.XA 2021-03-23 2021-03-23 Student emotion analysis system based on expression images Active CN112990030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110308529.XA CN112990030B (en) 2021-03-23 2021-03-23 Student emotion analysis system based on expression images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110308529.XA CN112990030B (en) 2021-03-23 2021-03-23 Student emotion analysis system based on expression images

Publications (2)

Publication Number Publication Date
CN112990030A (en) 2021-06-18
CN112990030B CN112990030B (en) 2024-06-14

Family

ID=76333258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110308529.XA Active CN112990030B (en) 2021-03-23 2021-03-23 Student emotion analysis system based on expression images

Country Status (1)

Country Link
CN (1) CN112990030B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971090A (en) * 2007-09-28 2014-08-06 富士胶片株式会社 Image processing apparatus, image capturing apparatus, image processing method and recording medium
US20160275341A1 (en) * 2015-03-18 2016-09-22 Adobe Systems Incorporated Facial Expression Capture for Character Animation
CN111414839A (en) * 2020-03-16 2020-07-14 清华大学 Emotion recognition method and device based on gestures
CN111523444A (en) * 2020-04-21 2020-08-11 南通大学 Classroom behavior detection method based on improved Openpos model and facial micro-expressions
CN111950486A (en) * 2020-08-18 2020-11-17 四川创客知佳科技有限公司 Teaching video processing method based on cloud computing
CN112200138A (en) * 2020-10-30 2021-01-08 福州大学 Classroom learning situation analysis method based on computer vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ASIT BARMAN et al.: "Facial expression recognition using distance and texture signature relevant features", Applied Soft Computing Journal *
REN Jie: "Research on comprehensive evaluation of student concentration based on machine vision", China Master's Theses Full-text Database, Social Sciences II, pages 1-2 *
XU Liangfeng; WANG Jiayong; CUI Jingnan; HU Min; ZHANG Keke; TENG Wendi: "Dynamic facial expression recognition based on dynamic time warping and active appearance model", Journal of Electronics & Information Technology, no. 02 *

Also Published As

Publication number Publication date
CN112990030B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN109360550B (en) Testing method, device, equipment and storage medium of voice interaction system
CN109408821B (en) Corpus generation method and device, computing equipment and storage medium
CN110415569B (en) Campus classroom sharing education method and system
KR102415103B1 (en) A device that analyzes psychology based on an artificial intelligence model generated using voice data and text data
CN110580516B (en) Interaction method and device based on intelligent robot
CN108764149B (en) Training method for class student face model
CN110264243A (en) Product promotion method, apparatus, equipment and storage medium based on In vivo detection
CN110276587A (en) The method, apparatus of project examination calculates equipment and computer readable storage medium
CN110852271A (en) Micro-expression recognition method based on peak frame and deep forest
CN113269903A (en) Face recognition class attendance system
CN114398909A (en) Question generation method, device, equipment and storage medium for dialogue training
CN113705792A (en) Personalized recommendation method, device, equipment and medium based on deep learning model
CN112734966A (en) Classroom roll call method integrating WiFi data and face recognition
CN112990030B (en) Student emotion analysis system based on expression images
CN115905187A (en) Intelligent proposition system for cloud computing engineering technician authentication
CN110110280B (en) Curve integral calculation method, device and equipment for coordinates and storage medium
CN111339939B (en) Attendance checking method and device based on image recognition
CN112307186A (en) Question-answering service method, system, terminal device and medium based on emotion recognition
CN112182147A (en) Extensible intelligent question-answering method and system
CN116452072B (en) Teaching evaluation method, system, equipment and readable storage medium
CN117408679B (en) Operation and maintenance scene information processing method and device
CN103020224A (en) Method and device of intelligent search
CN113642336B (en) SaaS-based insurance automatic question-answering method and system
CN116414970A (en) Data processing method, apparatus, program product, computer device, and medium
Zhang et al. Design and implementation of face recognition attendance management system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240519

Address after: Unit 07, 05/F, Unit 3, Building 2, Yunsheng Science Park Property, No. 11 Spectral Middle Road, Huangpu District, Guangzhou City, Guangdong Province, 510000

Applicant after: Guangdong Lingjun Technology Co., Ltd.

Country or region after: China

Address before: Room 1-105, Wanbo Beiyuan, Yizheng, Yangzhou, Jiangsu, 211999

Applicant before: Fan Yifei

Country or region before: China

GR01 Patent grant