CN112907408A - Method, device, medium and electronic equipment for evaluating learning effect of students - Google Patents

Method, device, medium and electronic equipment for evaluating learning effect of students

Info

Publication number
CN112907408A
Authority
CN
China
Prior art keywords
information
preset
type
mouth
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110224182.0A
Other languages
Chinese (zh)
Inventor
王珂晟
黄劲
黄钢
许巧龄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Anbo Chuangying Education Technology Co ltd
Original Assignee
Beijing Anbo Chuangying Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Anbo Chuangying Education Technology Co ltd
Priority to CN202110224182.0A priority Critical patent/CN112907408A/en
Publication of CN112907408A publication Critical patent/CN112907408A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Strategic Management (AREA)
  • Educational Administration (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • Educational Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a method, a device, a medium and electronic equipment for evaluating the learning effect of students. In the method, detection feature information is acquired from a preset detection type and the face image in the student's video; the sample form information corresponding to the detection feature information is then found by similarity matching, and the preset evaluation value associated with the matched sample form information is obtained. The teaching teacher can thus obtain the learning effect of each student in real time, teaching and its effect are organically combined in the live-broadcast teaching classroom, and teaching efficiency is improved.

Description

Method, device, medium and electronic equipment for evaluating learning effect of students
Technical Field
The disclosure relates to the field of live-broadcast teaching, and in particular to a method, device, medium and electronic equipment for evaluating the learning effect of students.
Background
Teaching interaction is an important teaching means. The teaching process can be regarded as a dynamically developing process of mutual influence and joint activity in which teaching and learning are unified. During teaching, adjusting the teacher-student relationship and their interaction produces resonance between teaching and learning, and this is a teaching means for improving the teaching effect.
With the development of computer technology, internet-based live-broadcast teaching has begun to rise, and with it the panoramic intelligent blackboard, which combines live teaching with multimedia technology. A panoramic intelligent blackboard contains a plurality of functional display areas, each displaying the same or different content. For example, as shown in FIG. 1, the left third of the panoramic intelligent blackboard is a whole-body image display area, the middle part is a teaching content display area, and the right third is an interactive area whose upper part is a head-portrait display area for the students participating in the live teaching. Beyond these functional display areas, the entire surface of the panoramic intelligent blackboard can serve as a blackboard and be written on. The content displayed on the board surface can be shown both on the teaching teacher's side and on the side of the students participating in remote teaching. Because the figures of the participants and the teaching content are tightly combined in the live classroom, participants in live teaching can overcome the sense of distance, the sense of presence is enhanced, and interest in teaching is increased.
However, a live-broadcast teaching classroom is not like an offline classroom: the teaching teacher cannot obtain feedback on the teaching effect from the students' expressions, especially when facing a large number of students. Teaching and its effect are thus completely cut off from each other, which seriously affects the teaching effect of live-broadcast teaching.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The present disclosure is directed to a method, an apparatus, a medium, and an electronic device for evaluating a learning effect of a student, which can solve at least one of the above-mentioned technical problems. The specific scheme is as follows:
according to a specific embodiment of the present disclosure, in a first aspect, the present disclosure provides a method for evaluating learning effect of a student, including:
acquiring a student video transmitted by a terminal logged in to a live teaching classroom, wherein frame images of the student video comprise a face image of the student;
acquiring detection feature information based on a preset detection type and the face image;
performing similarity matching between the detection feature information and each type of pre-stored sample form information for the student, to obtain a similarity matching value for each piece of sample form information;
judging whether the similarity matching value meets a preset similarity threshold; and
if so, acquiring the preset evaluation value corresponding to the sample form information whose similarity matching value meets the threshold.
According to a second aspect, the present disclosure provides an apparatus for evaluating learning effect of a student, including:
an image acquisition unit, configured to acquire a student video transmitted by a terminal logged in to a live teaching classroom, wherein frame images of the student video comprise a face image of the student;
a feature acquisition unit, configured to acquire detection feature information based on a preset detection type and the face image;
a matching unit, configured to perform similarity matching between the detection feature information and each type of pre-stored sample form information for the student, to obtain a similarity matching value for each piece of sample form information;
a judging unit, configured to judge whether the similarity matching value meets a preset similarity threshold; and
an evaluation unit, configured to, if so, acquire the preset evaluation value corresponding to the sample form information whose similarity matching value meets the threshold.
According to a third aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of evaluating the learning effect of a student according to the first aspect.
According to a fourth aspect, the present disclosure provides an electronic device, comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of evaluating the learning effect of a student as described in the first aspect.
Compared with the prior art, the scheme of the embodiment of the disclosure at least has the following beneficial effects:
the disclosure provides a method, a device, a medium and electronic equipment for evaluating learning effect of students. According to the method, detection characteristic information is obtained through a preset detection type and a face image in a screen of a student, then sample form information corresponding to the detection characteristic information is obtained through similarity matching, and then a corresponding preset evaluation value is obtained through the matched sample form information. The teaching teacher can obtain the learning effect of each student in real time, the organic combination of teaching and effect in a live-broadcast teaching classroom is realized, and the teaching efficiency is improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale. In the drawings:
FIG. 1 shows a schematic of a panoramic intelligent blackboard;
FIG. 2 shows a flow chart of a method of evaluating student learning effectiveness according to an embodiment of the disclosure;
FIG. 3 illustrates eye feature information according to an embodiment of the present disclosure;
FIG. 4 illustrates mouth feature information according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram showing students' learning effects on a panoramic intelligent blackboard;
FIG. 6 shows a block diagram of the units of an apparatus for evaluating student learning effectiveness according to an embodiment of the present disclosure;
FIG. 7 shows a schematic of the connection structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Alternative embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The first embodiment provided by the present disclosure is an embodiment of a method for evaluating the learning effect of a student.
The embodiment of the present disclosure is described in detail below with reference to FIG. 2.
Step S201, obtaining student videos transmitted by terminals logging in a live teaching classroom.
In live-broadcast teaching, each teaching teacher can establish a dedicated live teaching classroom by opening a course. Each student logs in to that live teaching classroom through a remote student terminal, which completes the attendance for the lecture. In the embodiment of the present disclosure, after a student logs in, the student's video is collected through the camera of the student terminal and transmitted to the head-portrait display area of the panoramic intelligent blackboard on the teacher terminal for display. The frame images of the student video comprise a face image of the student; the case where no face image appears in a frame image, which can occur in practical application scenarios, is not considered by the method of this embodiment.
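As a minimal sketch of this step (assuming OpenCV, a generic stream URL, and OpenCV's bundled Haar cascade as a stand-in face detector; the patent specifies none of these), the student video can be filtered down to the frames that actually contain a face:

```python
import cv2

def frames_with_faces(stream_url):
    """Yield frames of a student video that contain a face.

    `stream_url` and the Haar-cascade detector are assumptions; the
    patent names neither a transport nor a face detector.
    """
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(stream_url)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if len(detector.detectMultiScale(gray, 1.3, 5)) > 0:
            yield frame  # a frame image containing the student's face
    cap.release()
```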
Step S202, detecting characteristic information is obtained based on a preset detection type and the face image.
The learning effect is divided into multiple types, and the preset detection type is one of them; evaluating the student's learning effect by a preset detection type reflects that effect, in an auxiliary way, from one aspect. For example, the preset detection type includes a preset concentration type or a preset understanding type: the former detects the student's concentration in the live teaching classroom, the latter the student's understanding.
The detection feature information is associated with the preset detection type: with a different preset detection type, the detection feature information acquired from the face image may differ, and so may the final evaluation result. Combining the evaluation results of multiple preset detection types allows the student's learning effect to be evaluated from all sides.
In one embodiment, the acquiring of the detection feature information based on the preset detection type and the face image includes:
In step S202-1, facial organ information is extracted from the face image.
The facial organ information includes the binocular information, eyebrow information, binaural information, mouth information and nose information in the face image, together with the facial expression information associated with these organs.
This embodiment does not describe in detail the process of extracting facial organ information from the face image; it can be implemented by referring to various implementations in the prior art.
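One prior-art way to obtain such organ information is a 68-point facial landmark model. The sketch below assumes dlib and its standard `shape_predictor_68_face_landmarks.dat` model file; the organ index ranges follow the common iBUG annotation scheme, not anything specified by the patent:

```python
import dlib

# Index ranges of the standard 68-point (iBUG) landmark layout.
ORGANS = {
    "right_eyebrow": range(17, 22), "left_eyebrow": range(22, 27),
    "nose": range(27, 36), "right_eye": range(36, 42),
    "left_eye": range(42, 48), "mouth": range(48, 68),
}

detector = dlib.get_frontal_face_detector()
# The model file is assumed to be available locally.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_organ_info(image):
    """Return {organ: [(x, y), ...]} for the first detected face, or None."""
    faces = detector(image)
    if not faces:
        return None
    shape = predictor(image, faces[0])
    return {organ: [(shape.part(i).x, shape.part(i).y) for i in idx]
            for organ, idx in ORGANS.items()}
```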
Step S202-2, extracting the detection feature information from the facial organ information according to a preset detection type.
The facial organ information contains the detection feature information, and it is the detection feature information that is used to evaluate the student's learning effect. Each preset detection type corresponds to at least one kind of detection feature information, so different preset detection types may extract different detection feature information from the facial organ information.
In a specific application, the preset detection type includes a preset concentration type or a preset understanding type.
Extracting the detection feature information from the facial organ information according to a preset detection type includes the following step:
extracting the eye feature information and the mouth feature information included in the detection feature information from the facial organ information according to the preset concentration type or preset understanding type.
That is, the eye feature information and the mouth feature information are the detection feature information of the preset concentration type or the preset understanding type.
In a specific embodiment, as shown in FIG. 3, extracting the eye feature information included in the detection feature information from the facial organ information according to the preset concentration type or preset understanding type includes the following steps:
Step S202-2a-1, acquiring original eye information according to the preset concentration type or preset understanding type and the facial organ information.
This embodiment does not describe in detail the process of obtaining original eye information from the facial organ information; it can be implemented by referring to various implementations in the prior art.
Step S202-2a-2, adjusting the original eye information based on the preset contrast value, preset saturation value and/or preset hue value associated with the sample form information, to obtain final eye information.
The sample form information is form information of the student's detection features collected in advance, and each piece of sample form information is associated with a preset evaluation value.
Because light reflects in the eye, a color clearly distinguishable from the rest of the eye appears in the eye image, and the size of the highlight region can be determined from this distinguishing color. For the preset concentration type or preset understanding type, the size of the highlight region on the student's eyeball is an important reference for evaluating the learning effect. However, each student's environment in front of the student terminal differs, so the contrast, saturation and/or hue values of each piece of original eye information also differ, and all of these affect how accurately the highlight region of the eyeball can be acquired. To perform similarity matching between the collected information and the sample form information under the same contrast, saturation and/or hue values, the preset contrast value, preset saturation value and preset hue value are all taken from the sample form information, and the original eye information is adjusted by them so that the similarity matching becomes more accurate.
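A hedged sketch of this normalization step, assuming OpenCV and treating the preset contrast, saturation and hue values as illustrative placeholders taken from the sample form information:

```python
import cv2
import numpy as np

def adjust_eye_info(eye_bgr, preset_contrast=1.2, preset_saturation=140,
                    preset_hue=None):
    """Normalize an eye crop toward the contrast/saturation/hue presets
    taken from the sample form information. All preset values here are
    illustrative placeholders, not values given by the patent.
    """
    # Contrast: rescale intensities around the mean by the preset factor.
    mean = eye_bgr.mean()
    adjusted = np.clip((eye_bgr.astype(np.float32) - mean) * preset_contrast
                       + mean, 0, 255).astype(np.uint8)
    hsv = cv2.cvtColor(adjusted, cv2.COLOR_BGR2HSV)
    hsv[..., 1] = preset_saturation   # pin saturation to the preset value
    if preset_hue is not None:
        hsv[..., 0] = preset_hue      # optionally pin hue as well
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)  # the "final eye information"
```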
Step S202-2a-3, extracting, from the final eye information, the eyelid contour and the highlight contour in the eyeball included in the eye feature information.
A contour here means the peripheral line of a region.
The highlight contour in the eyeball is produced by the eyeball's curvature under reflected light, so it varies with that curvature; and the curvature of the eyeball determines its focal length, that is, where the person's attention rests. The greater the curvature of the eyeball, the closer the point being looked at and the smaller the highlight contour. When a student participates in live teaching in front of the student terminal, the highlight contour in the student's eyeball becoming smaller than its normal contour indicates a change in concentration or understanding: a smaller highlight contour indicates lower concentration or understanding than the normal contour. The student's concentration or understanding can therefore be analyzed from the area of the highlight contour in the eyeball.
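The highlight area could then be measured roughly as below; the brightness threshold is an assumption, since the text only says the highlight color is clearly distinguishable:

```python
import cv2

def highlight_contour_area(eye_gray, bright_thresh=230):
    """Area of the largest specular-highlight contour in a grayscale eye
    crop. The brightness threshold is an assumption; the text only says
    the highlight color is clearly distinguishable from the rest of the eye.
    """
    _, mask = cv2.threshold(eye_gray, bright_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Per the text, a smaller area suggests lower concentration/understanding.
    return max((cv2.contourArea(c) for c in contours), default=0.0)
```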
In another specific embodiment, as shown in FIG. 4, extracting the mouth feature information included in the detection feature information from the facial organ information according to the preset concentration type or preset understanding type includes the following steps:
Step S202-2b-1, acquiring original mouth information according to the preset concentration type or preset understanding type and the facial organ information.
Step S202-2b-2, extracting the outer contour and the inner contour of the lip included in the mouth feature information from the mouth information.
Step S202-2b-3, analyzing the upper contour and the lower contour of the outer contour, and determining the intersection position of the upper contour and the lower contour as the mouth corner position.
For example: the direction perpendicular to the mouth opening is determined as the scanning direction; the pixels of the upper contour and the pixels of the lower contour are scanned along the scanning direction; and when a pixel of the upper contour and a pixel of the lower contour are at the same or adjacent pixel positions, that pixel position is determined as the mouth corner position.
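A simplified sketch of that scan, representing each contour as a map from a scan column to its pixel row (this data layout is an assumption for illustration, not the patent's representation):

```python
def mouth_corners(upper, lower):
    """Locate the mouth corners by the scan described above.

    `upper` and `lower` map each x coordinate (one vertical scan line per
    column) to the contour's y pixel in that column; this layout is a
    simplification of scanning contour pixels directly.
    """
    corners = [(x, upper[x]) for x in sorted(set(upper) & set(lower))
               if abs(upper[x] - lower[x]) <= 1]  # same or adjacent pixels
    if not corners:
        return None, None
    return corners[0], corners[-1]  # left and right mouth corner positions
```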
Step S202-2b-4, extracting, from the mouth information outside the outer contour, mutually adjacent pixels that meet the preset RGB threshold as the labial sulcus information included in the mouth feature information.
Here the mutually adjacent pixels are adjacent to the pixel at the mouth corner position.
The labial sulcus information is in effect the shadow information formed at the corners of the mouth. In this embodiment, the preset RGB threshold defines the color of the labial sulcus, that is, the color of the shadow, in the mouth image; the labial sulcus information outside the outer contour is found by that color, and in the image it connects to the mouth corner position. The shape of the labial sulcus can reflect the student's concentration or understanding.
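A possible realization of this step, assuming OpenCV; the color band standing in for the preset RGB threshold is illustrative, and note that OpenCV stores channels in BGR order:

```python
import cv2
import numpy as np

def labial_sulcus_pixels(mouth_bgr, outer_lip_mask, corner_xy,
                         low=(20, 20, 20), high=(90, 80, 80)):
    """Shadow pixels outside the outer lip contour that connect to the
    mouth corner. `low`/`high` stand in for the preset RGB threshold;
    the values are illustrative, and OpenCV stores channels as BGR.
    """
    shadow = cv2.inRange(mouth_bgr, np.array(low, np.uint8),
                         np.array(high, np.uint8))
    shadow[outer_lip_mask > 0] = 0   # keep only pixels outside the outer contour
    _, labels = cv2.connectedComponents(shadow)
    x, y = corner_xy
    label = labels[y, x]             # the component touching the mouth corner
    if label == 0:                   # corner pixel itself carries no shadow
        return np.zeros(shadow.shape, dtype=bool)
    return labels == label           # mask of the labial sulcus information
```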
To improve the accuracy of the evaluation, the embodiment of the present disclosure further optimizes step S202-2: after the detection feature information is extracted from the facial organ information according to the preset detection type, the method further comprises the following steps:
judging whether the facial organ information comprises binocular information, eyebrow information, mouth information and nose information;
and if not, acquiring the next face image.
It can be understood that if the facial organ information does not contain complete binocular information, eyebrow information, mouth information and nose information, the face image is not a frontal image of the student, so the next face image is acquired and evaluated instead.
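Building on the hypothetical organ dictionary from the landmark sketch above (which, being a fixed-layout predictor, is a simplification: a production check would also validate landmark quality), the completeness check might look like this:

```python
REQUIRED_ORGANS = {"left_eye", "right_eye", "left_eyebrow",
                   "right_eyebrow", "mouth", "nose"}

def is_frontal(organ_info):
    """True only when every required organ was detected; otherwise the
    caller should move on to the next face image."""
    return organ_info is not None and REQUIRED_ORGANS <= organ_info.keys()
```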
Step S203, performing similarity matching between the detection feature information and each type of pre-stored sample form information for the student, and acquiring a similarity matching value for each piece of sample form information.
The sample form information is form information of the student's detection features collected in advance, and each piece of sample form information is associated with a preset evaluation value. For example, the sample form information of the labial sulcus includes upturned-mouth-corner forms and downturned-mouth-corner forms: the preset evaluation value is 8 points for lips upturned by 30 degrees, 7 points for 20 degrees, 6 points for 10 degrees, 5 points for 0 degrees (whether counted as upturned or downturned), 4 points for lips downturned by 10 degrees, 3 points for 20 degrees, and 2 points for 30 degrees. The higher the preset evaluation value, the higher the concentration/understanding. Each piece of sample form information carries the same preset contrast value, preset saturation value and/or preset hue value described above.
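The example scores above translate directly into a small lookup table; snapping a measured angle to the nearest sampled entry is an added assumption:

```python
# Lookup table built from the example scores in the text: mouth corners
# upturned (positive angle) score higher than downturned (negative angle).
PRESET_SCORES = {30: 8, 20: 7, 10: 6, 0: 5, -10: 4, -20: 3, -30: 2}

def preset_evaluation(corner_angle_deg):
    """Preset evaluation value for a measured mouth-corner angle, snapped
    to the nearest sampled angle in the table (the snapping is an added
    assumption)."""
    nearest = min(PRESET_SCORES, key=lambda a: abs(a - corner_angle_deg))
    return PRESET_SCORES[nearest]
```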
Optionally, the similarity matching value is the degree of coincidence between the detection feature information and the sample form information, expressed as a percentage.
Step S204, judging whether the similarity matching value meets the preset similarity threshold.
Because a piece of detection feature information can never be completely identical to the sample form information, the embodiment of the present disclosure introduces a preset similarity threshold to make the evaluation effective: when the similarity matching value between the detection feature information and a piece of sample form information meets the preset similarity threshold, the detection feature information is judged similar or identical to that sample form information. For example, if the similarity matching value of the detection feature information with a first piece of sample form information is 92%, its value with a second piece is 83%, and the preset similarity threshold is 90%, then the detection feature information is judged similar or identical to the first piece of sample form information. The more pieces of sample form information there are, the larger the preset similarity threshold should be set, so as to improve the recognition rate of the form information.
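A sketch of the threshold test using the text's example numbers; returning the single best match above the threshold is an assumption about how ties among several qualifying samples would be resolved:

```python
def best_match(similarities, threshold=0.90):
    """Return the id of the best-matching sample form information whose
    similarity matching value meets the preset threshold, or None.
    With the text's example, {"first": 0.92, "second": 0.83} -> "first".
    """
    sample_id, value = max(similarities.items(), key=lambda kv: kv[1])
    return sample_id if value >= threshold else None
```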
Step S205, if yes, acquiring the preset evaluation value corresponding to the sample form information whose similarity matching value meets the threshold.
The preset evaluation value is an empirical value, obtained experimentally, for the concentration or understanding represented by the sample form information.
It can be understood that when the detection feature information has the same form as a piece of sample form information, the preset evaluation value corresponding to that sample form information is also the evaluation value of the detection feature information.
Optionally, after the preset evaluation value is obtained for the detection feature information, the method further includes the following step: generating a learning effect value based on the preset evaluation values acquired periodically within a preset time.
Evaluating every frame image would be too frequent; that would neither meet the system's operating requirements nor be accurate. The embodiment of the present disclosure therefore detects face images collected periodically over a period of time, obtains a plurality of evaluation values, and evaluates them comprehensively (e.g., averages them) to generate the learning effect value.
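The periodic combination can be as simple as an average, the option the text itself gives:

```python
import statistics

def learning_effect_value(periodic_evaluations):
    """Combine preset evaluation values sampled periodically over the
    preset time window; averaging is the combination the text offers
    as an example."""
    return statistics.mean(periodic_evaluations) if periodic_evaluations else None
```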
As shown in FIG. 5, the preset evaluation value of each student attending the lecture is displayed as a bar (Bar) on a screen placed in front of the teaching teacher, who can grasp the current students' learning effect in real time through the bars and continuously adjust the teaching strategy (including lecture content, pace, interaction techniques, etc.) accordingly. If the bars are graded (e.g., 1-10), a higher grade indicates higher concentration/understanding.
In the embodiment of the present disclosure, the detection feature information is acquired from the preset detection type and the face image in the student's video; the sample form information corresponding to the detection feature information is then found by similarity matching, and the preset evaluation value associated with the matched sample form information is obtained. The teaching teacher can thus obtain the learning effect of each student in real time, teaching and its effect are organically combined in the live-broadcast teaching classroom, and teaching efficiency is improved.
The present disclosure also provides a second embodiment, namely an apparatus for evaluating the learning effect of students, corresponding to the first embodiment. Since the second embodiment is basically similar to the first, its description is brief; for the relevant parts, refer to the corresponding description of the first embodiment. The apparatus embodiment described below is merely illustrative.
Fig. 6 shows an embodiment of an apparatus for evaluating the learning effect of students provided by the present disclosure.
As shown in fig. 6, the present disclosure provides an apparatus for evaluating learning effect of students, comprising:
an image acquisition unit 601, configured to acquire a student video transmitted by a terminal logged in to a live teaching classroom, wherein frame images of the student video comprise a face image of the student;
a feature acquisition unit 602, configured to acquire detection feature information based on a preset detection type and the face image;
a matching unit 603, configured to perform similarity matching between the detection feature information and each type of pre-stored sample form information for the student, and obtain a similarity matching value for each piece of sample form information;
a judging unit 604, configured to judge whether the similarity matching value meets a preset similarity threshold;
an evaluation unit 605, configured to, if yes, acquire the preset evaluation value corresponding to the sample form information whose similarity matching value meets the threshold.
Optionally, the feature acquisition unit 602 includes:
an extraction subunit, configured to extract facial organ information from the face image;
a first feature acquisition subunit, configured to acquire the detection feature information based on the preset detection type and the facial organ information.
Optionally, the feature acquisition unit 602 further includes:
a judging subunit, configured to judge, after the detection feature information is extracted from the facial organ information according to the preset detection type, whether the facial organ information includes binocular information, eyebrow information, mouth information and nose information;
a returning subunit, configured to acquire the next face image if not.
Optionally, the preset detection type includes a preset concentration type or a preset understanding type;
in the first acquisition feature subunit, the method includes:
and the second acquisition characteristic subunit is used for extracting the eye characteristic information and the mouth characteristic information included in the detection characteristic information from the facial organ information according to a preset concentration degree type or a preset comprehension degree type.
Optionally, in the second obtaining feature subunit, the method includes:
the eye information acquisition subunit is used for acquiring original eye information according to a preset concentration degree type or a preset comprehension degree type and the facial organ information;
an adjusting subunit, configured to adjust the original eye information based on a preset contrast value, a preset saturation value, and/or a preset hue value associated with the sample morphology information, so as to obtain final eye information;
and the eye feature extracting subunit is used for extracting an eyelid contour and a highlight contour in the eyeball from the final eye information.
Optionally, in the second obtaining feature subunit, the method includes:
the mouth information acquisition subunit is used for acquiring original mouth information according to a preset concentration type or a preset comprehension type and the facial organ information;
an extract lip outline subunit configured to extract, from the mouth information, an outer outline and an inner outline of a lip included in the mouth feature information;
the mouth angle determining subunit is used for analyzing an upper contour and a lower contour of the outer contour and determining the intersection point position of the upper contour and the lower contour as a mouth angle position;
an extraction labial sulcus subunit configured to extract, from the mouth information outside the outer contour, pixels adjacent to each other that satisfy a preset RGB threshold as labial sulcus information included in the mouth feature information, wherein the pixels adjacent to each other are adjacent to a pixel of the mouth corner position.
Optionally, in the mouth corner determining subunit, the method includes:
a direction determining subunit, configured to determine an opening direction perpendicular to the mouth as a scanning direction;
a scanning subunit, configured to scan the pixels of the upper outline and the pixels of the lower outline along the scanning direction;
and the judging subunit is used for determining the pixel position as the mouth corner position when the pixel of the upper outline and the pixel of the lower outline are at the same or adjacent pixel positions.
In the embodiment of the present disclosure, the detection feature information is acquired from the preset detection type and the face image in the student's video; the sample form information corresponding to the detection feature information is then found by similarity matching, and the preset evaluation value associated with the matched sample form information is obtained. The teaching teacher can thus obtain the learning effect of each student in real time, teaching and its effect are organically combined in the live-broadcast teaching classroom, and teaching efficiency is improved.
The present disclosure provides a third embodiment, namely an electronic device for the method of evaluating the learning effect of students, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed to cause the at least one processor to perform the method of evaluating the learning effect of students as described in the first embodiment.
The present disclosure provides a fourth embodiment, namely a computer storage medium for evaluating the learning effect of students, the computer storage medium storing computer-executable instructions capable of executing the method of evaluating the learning effect of students as described in the first embodiment.
Referring now to FIG. 7, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the electronic device may include a processing device (e.g., central processing unit, graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage device 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication device 709 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the disclosure and an illustration of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of those features or their equivalents without departing from the spirit of the disclosure; for example, technical solutions in which the above features are interchanged with (but not limited to) features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A method for evaluating learning effectiveness of a student, comprising:
acquiring a student video transmitted by a terminal logged in to a live teaching classroom, wherein frame images of the student video comprise a face image of the student;
acquiring detection feature information based on a preset detection type and the face image;
performing similarity matching between the detection feature information and each type of pre-stored sample form information for the student, to obtain a similarity matching value for each piece of sample form information;
judging whether the similarity matching value meets a preset similarity threshold; and
if so, acquiring the preset evaluation value corresponding to the sample form information whose similarity matching value meets the threshold.
2. The method according to claim 1, wherein the acquiring detection feature information based on a preset detection type and the face image includes:
extracting facial organ information from the facial image;
and extracting the detection characteristic information from the facial organ information according to a preset detection type.
3. The method according to claim 2, further comprising, after the extracting the detection feature information from the facial organ information according to a preset detection type:
judging whether the facial organ information comprises binocular information, eyebrow information, mouth information and nose information;
and if not, acquiring the next face image.
4. The method of claim 2,
the preset detection type comprises a preset concentration type or a preset comprehension type;
the extracting the detection feature information from the facial organ information according to a preset detection type includes:
and extracting eye feature information and mouth feature information included in the detection feature information from the facial organ information according to a preset concentration degree type or a preset comprehension degree type.
5. The method according to claim 4, wherein the extracting the eye feature information included in the detected feature information from the facial organ information according to a preset concentration type or a preset understanding type comprises:
acquiring original eye information according to a preset concentration degree type or a preset comprehension degree type and the facial organ information;
adjusting the original eye information based on a preset contrast value, a preset saturation value and/or a preset hue value associated with the sample form information to obtain final eye information;
and extracting eyelid contours and highlight contours in eyeballs included in the eye feature information from the final eye information.
6. The method according to claim 4, wherein the extracting mouth feature information included in the detected feature information from the facial organ information according to a preset concentration type or a preset understanding type includes:
acquiring original mouth information according to a preset concentration type or a preset comprehension type and the facial organ information;
extracting an outer contour and an inner contour of a lip included in the mouth feature information from the mouth information;
analyzing an upper contour and a lower contour of the outer contour, and determining the intersection position of the upper contour and the lower contour as a mouth corner position;
extracting pixels adjacent to each other satisfying a preset RGB threshold from the mouth information outside the outer contour as labial sulcus information included in the mouth feature information, wherein the pixels adjacent to each other are adjacent to the pixel at the mouth corner position.
7. The method of claim 6, wherein said analyzing an upper contour and a lower contour of said outer contour and determining the intersection position of the upper contour and the lower contour as a mouth corner position comprises:
determining the direction perpendicular to the mouth opening as a scanning direction;
scanning pixels of the upper contour and pixels of the lower contour along the scanning direction;
and when the pixel of the upper contour and the pixel of the lower contour are at the same or adjacent pixel positions, determining the pixel position as the mouth corner position.
8. An apparatus for evaluating learning effects of students, comprising:
an image acquisition unit, configured to acquire a student video transmitted by a terminal logged in to a live teaching classroom, wherein frame images of the student video comprise a face image of the student;
a feature acquisition unit, configured to acquire detection feature information based on a preset detection type and the face image;
a matching unit, configured to perform similarity matching between the detection feature information and each type of pre-stored sample form information for the student, to obtain a similarity matching value for each piece of sample form information;
a judging unit, configured to judge whether the similarity matching value meets a preset similarity threshold; and
an evaluation unit, configured to, if so, acquire the preset evaluation value corresponding to the sample form information whose similarity matching value meets the threshold.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1 to 7.
CN202110224182.0A 2021-03-01 2021-03-01 Method, device, medium and electronic equipment for evaluating learning effect of students Pending CN112907408A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110224182.0A CN112907408A (en) 2021-03-01 2021-03-01 Method, device, medium and electronic equipment for evaluating learning effect of students

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110224182.0A CN112907408A (en) 2021-03-01 2021-03-01 Method, device, medium and electronic equipment for evaluating learning effect of students

Publications (1)

Publication Number Publication Date
CN112907408A (en) 2021-06-04

Family

ID=76108144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110224182.0A Pending CN112907408A (en) 2021-03-01 2021-03-01 Method, device, medium and electronic equipment for evaluating learning effect of students

Country Status (1)

Country Link
CN (1) CN112907408A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017215315A1 * 2016-06-12 2017-12-21 杭州海康威视系统技术有限公司 Attendance monitoring method, system and apparatus for teacher during class
CN106851216A (en) * 2017-03-10 2017-06-13 山东师范大学 A kind of classroom behavior monitoring system and method based on face and speech recognition
WO2020078464A1 (en) * 2018-10-19 2020-04-23 上海商汤智能科技有限公司 Driving state detection method and apparatus, driver monitoring system, and vehicle
CN109214966A (en) * 2018-10-25 2019-01-15 重庆鲁班机器人技术研究院有限公司 Learning effect acquisition methods, device and electronic equipment
CN110363084A * 2019-06-10 2019-10-22 北京大米科技有限公司 A kind of class state detection method, device, storage medium and electronic device
CN111027486A (en) * 2019-12-11 2020-04-17 李思娴 Auxiliary analysis and evaluation system and method for big data of teaching effect of primary and secondary school classroom
CN111242049A (en) * 2020-01-15 2020-06-05 武汉科技大学 Student online class learning state evaluation method and system based on facial recognition
CN111800646A (en) * 2020-06-24 2020-10-20 北京安博盛赢教育科技有限责任公司 Method, device, medium and electronic equipment for monitoring teaching effect
CN111862705A (en) * 2020-06-24 2020-10-30 北京安博盛赢教育科技有限责任公司 Method, device, medium and electronic equipment for prompting live broadcast teaching target

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Wansen; Wen Shaojie; Guo Fengying: "Research on an intelligent emotional network teaching system based on artificial emotion", Journal of Chinese Computer Systems, no. 03, pages 569-572 *

Similar Documents

Publication Publication Date Title
US11734851B2 (en) Face key point detection method and apparatus, storage medium, and electronic device
CN107578017B (en) Method and apparatus for generating image
CN109902659B (en) Method and apparatus for processing human body image
CN109614934B (en) Online teaching quality assessment parameter generation method and device
US11436863B2 (en) Method and apparatus for outputting data
CN109784304B (en) Method and apparatus for labeling dental images
US11443438B2 (en) Network module and distribution method and apparatus, electronic device, and storage medium
CN108388889B (en) Method and device for analyzing face image
CN110059623B (en) Method and apparatus for generating information
CN110059624B (en) Method and apparatus for detecting living body
CN112017257B (en) Image processing method, apparatus and storage medium
CN111310815A (en) Image recognition method and device, electronic equipment and storage medium
WO2022233223A1 (en) Image splicing method and apparatus, and device and medium
CN111783626A (en) Image recognition method and device, electronic equipment and storage medium
WO2023125365A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN109285181A (en) The method and apparatus of image for identification
CN113902636A (en) Image deblurring method and device, computer readable medium and electronic equipment
CN113283383A (en) Live broadcast behavior recognition method, device, equipment and readable medium
CN113110733A (en) Virtual field interaction method and system based on remote duplex
CN112087590A (en) Image processing method, device, system and computer storage medium
CN112101258A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN111260756B (en) Method and device for transmitting information
CN110765304A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN112231023A (en) Information display method, device, equipment and storage medium
CN112907408A (en) Method, device, medium and electronic equipment for evaluating learning effect of students

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 1202, 12 / F, building 1, yard 54, Shijingshan Road, Shijingshan District, Beijing

Applicant after: Oook (Beijing) Education Technology Co.,Ltd.

Address before: 100041 1202, 12 / F, building 1, yard 54, Shijingshan Road, Shijingshan District, Beijing

Applicant before: Beijing Anbo chuangying Education Technology Co.,Ltd.