CN111008542A - Object concentration analysis method and device, electronic terminal and storage medium - Google Patents

Object concentration analysis method and device, electronic terminal and storage medium

Info

Publication number
CN111008542A
CN111008542A
Authority
CN
China
Prior art keywords
concentration
concentration degree
information
degree
analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811166639.1A
Other languages
Chinese (zh)
Inventor
郑文丞 (Zheng Wencheng)
张建华 (Zhang Jianhua)
姜远航 (Jiang Yuanhang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Wind Creation Information Consulting Co Ltd
Original Assignee
Shanghai Wind Creation Information Consulting Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wind Creation Information Consulting Co Ltd
Priority to CN201811166639.1A
Publication of CN111008542A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an object concentration analysis method and device, an electronic terminal, and a storage medium. Facial image information of a subject is acquired, sight line information and/or emotion information is extracted from the facial image, and the extracted information is analyzed to obtain the concentration degree of the subject.

Description

Object concentration analysis method and device, electronic terminal and storage medium
Technical Field
The invention relates to the technical field of image data processing, in particular to an object concentration degree analysis method and device, an electronic terminal and a storage medium.
Background
With the continuous progress of information and computer technology, internet-based online education and training have developed rapidly. However, because close-range or face-to-face observation is impossible during online education or training, the attention of audience members cannot be assessed in time, the teacher or lecturer cannot give timely feedback or correction, and learning efficiency suffers. How to accurately and efficiently judge the concentration of audience members during online education or training has therefore become an urgent problem in the field.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention provides an object concentration analysis method and device, an electronic terminal, and a storage medium, which solve the prior-art problem that the concentration of an audience member cannot be accurately and efficiently determined during online education or training.
To achieve the above and other related objects, the present invention provides a method for analyzing the concentration of a subject, the method comprising: acquiring facial image information of the subject; extracting sight line information and/or emotion information from the subject's facial image; obtaining a first concentration degree according to the sight line information and/or a second concentration degree according to the emotion information, each of which is classified as either a high concentration degree or a low concentration degree; and obtaining a final concentration degree according to the first concentration degree and/or the second concentration degree.
In an embodiment of the invention, the method for obtaining the final concentration degree from the first concentration degree and the second concentration degree includes: if both the first and the second concentration degree are high, the final concentration degree is high; otherwise, the final concentration degree is low.
In an embodiment of the present invention, the method for extracting the sight line information includes: judging whether the subject's eye gaze falls within the range of the target the subject needs to attend to, according to pre-obtained information on the placement of the facial-image acquisition device and on the position and size of that target.
In an embodiment of the present invention, the method for extracting the sight line information further includes: when the subject's eyes are in a closed state, judging that the eye gaze does not fall within the range of the target the subject needs to attend to.
In an embodiment of the invention, the method for obtaining the first concentration degree from the sight line information includes: presetting a first time threshold; if, within that threshold, the subject's eye gaze falls within the target range more often than outside it, the first concentration degree is high; otherwise, the first concentration degree is low.
In an embodiment of the present invention, the method for extracting the emotion information includes: judging the corresponding emotion from the shapes and relative positions of the facial features in the subject's facial image information. The emotions include attentive emotions, namely any one or a combination of calm, happiness, and surprise; and non-attentive emotions, namely any one or a combination of disgust, anger, and fear.
In an embodiment of the invention, the method for obtaining the second concentration degree from the emotion information includes: presetting a second time threshold; if, within that threshold, attentive emotions are judged more often than non-attentive emotions, the second concentration degree is high; otherwise, the second concentration degree is low.
To achieve the above and other related objects, the present invention provides an object concentration degree analyzing apparatus, including: the acquisition module is used for acquiring the image information of the face of the object; a processing module for extracting line of sight information and/or emotion information from the subject face image; obtaining a first concentration degree according to the sight line information; and/or obtaining a second concentration degree according to the emotion information; and obtaining a final concentration degree according to the first concentration degree and/or the second concentration degree.
To achieve the above and other related objects, the present invention provides an electronic terminal, comprising: a processor, a memory, and a communicator; the memory is used for storing a computer program, the processor is used for implementing the above object concentration analysis method when executing the computer program stored in the memory, and the communicator is used for communicatively connecting with an external device.
To achieve the above and other related objects, the present invention provides a computer-readable storage medium having stored thereon a computer program, which, when executed by a processor, implements the method for concentration analysis of an object as described above.
As described above, with the object concentration analysis method and device, electronic terminal, and storage medium of the present invention, the subject's facial information is acquired, emotion information and/or sight line information is extracted from the facial image, and the extracted information is analyzed to obtain the subject's concentration. This has the following beneficial effects: the concentration of an audience member can be understood simply and accurately, which helps the instructor give relevant feedback in time and improves learning efficiency.
Drawings
Fig. 1 is a schematic flow chart illustrating an object concentration analysis method according to an embodiment of the present invention.
Fig. 2 is a schematic block diagram of an object concentration analysis apparatus according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an electronic terminal according to an embodiment of the invention.
Description of the element reference numerals
Method steps S101 to S104
201 acquisition module
202 processing module
301 processor
302 memory
303 communicator
Detailed Description
The embodiments of the present invention are described below by way of specific examples, and those skilled in the art will readily understand other advantages and effects of the present invention from the disclosure of this specification. The invention can also be implemented or applied through other, different embodiments, and the details in this specification may be modified or changed in various respects without departing from the spirit and scope of the invention. It should be noted that the features in the following embodiments and examples may be combined with each other in the absence of conflict.
It should also be noted that the drawings provided with the following embodiments only illustrate the basic idea of the invention in a schematic way. The drawings show only the components related to the invention rather than the actual number, shape, and size of components in implementation; the type, quantity, and proportion of each component may change freely in practice, and the component layout may be more complicated.
The object concentration degree analysis method, the object concentration degree analysis device, the electronic terminal and the storage medium are used for solving the problem that the concentration degree of an audience object in an online education or training process cannot be accurately and efficiently judged in the prior art.
Fig. 1 is a schematic flow chart showing an object concentration analysis method according to an embodiment of the present invention. The method comprises the following steps:
step S101: subject face image information is acquired.
In an embodiment of the present invention, the facial image information may be obtained by recording a video of the taught subject, by extracting frames or screenshots from the recorded video, or by photographing the taught subject to obtain an image set.
In an embodiment of the invention, the subject's facial image information may be obtained directly from a connected capture device, or from other equipment that stores the facial image data. The facial data may be a complete video stream, a set of facial images, or frames extracted from a video stream at fixed time intervals.
For example, the taught subject may learn through any one of a desktop computer, a laptop computer, a tablet computer, and a mobile terminal equipped with a camera. While the subject is learning, the camera records video or takes pictures at fixed intervals in real time, yielding facial information data of the subject during the learning process.
In an embodiment of the present invention, the interval for photographing or frame extraction should not be too long. If it is too long, the facial information in successive images becomes discontinuous and stale, which easily leads to inaccuracy: the acquired facial image may show the subject looking elsewhere or with eyes closed, when in reality the subject merely turned their head or blinked unintentionally, which does not mean the audience member was inattentive.
In an embodiment of the present invention, frames are preferably extracted from the video stream at a fixed frame-count interval; alternatively, photographs are taken every few seconds, for example every 5 seconds.
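As a minimal sketch of this sampling step (assuming OpenCV is available; the function name and the 5-second default are illustrative, not taken from the patent):

```python
import cv2

def extract_frames(video_path, interval_seconds=5.0):
    """Yield one frame roughly every `interval_seconds` of video time."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0           # fall back if FPS metadata is missing
    step = max(1, int(round(fps * interval_seconds)))  # frames to skip between samples
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:                                     # end of stream
            break
        if index % step == 0:
            yield frame                                # a BGR image (numpy array)
        index += 1
    cap.release()
```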
In an embodiment of the invention, although recording and analyzing the attentiveness of the audience member in real time from video data reflects changes in attentiveness during learning more faithfully and makes the analysis more accurate, real-time analysis increases recognition computation time and storage burden. The invention therefore offers both video and image options to suit practical scenarios.
Step S102: line-of-sight information and/or emotion information is extracted from the subject face image.
In an embodiment of the present invention, the method for extracting the sight line information includes: judging whether the subject's eye gaze falls within the range of the target the subject needs to attend to, according to pre-obtained information on the placement of the facial-image acquisition device and on the position and size of that target.
In an embodiment of the invention, the placement of the facial-image acquisition device and the position and size of the target the subject needs to attend to are expressed as world coordinate information, which can be known in advance. By constructing a unified world coordinate system, spatial relationships such as the position of the acquisition device and the position and size of the target can be converted into machine-readable digital information for subsequent computation.
For example, suppose a student learns online through a desktop computer. The device acquiring the facial image is a camera connected to the computer, and the target the student needs to attend to is the display screen, or the teaching-content window within it, whose size is the size of the screen or window. Their world coordinates can be known in advance, or adjusted. This yields a machine-perceivable physical world coordinate system in which the position of the camera, the position and size of the screen or teaching-content window, and their relative spatial relationships are all clearly known.
In an embodiment of the present invention, when the pupils of the subject's eyes are visible in the facial image information, the eye gaze vector can be computed with an eye-image-based gaze detection algorithm and then placed into the known physical world coordinate system, so as to judge whether the subject's eye gaze falls within the range of the target the subject needs to attend to.
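A minimal geometric sketch of this judgment, assuming the world coordinate system is chosen so that the screen lies in the plane z = 0 with one corner at the origin (the axis convention, function names, and dimensions below are assumptions, not the patent's specification):

```python
import numpy as np

def gaze_hits_screen(eye_pos, gaze_dir, screen_w, screen_h):
    """Return True if a gaze ray from eye_pos along gaze_dir crosses the
    screen rectangle [0, screen_w] x [0, screen_h] in the plane z = 0."""
    eye = np.asarray(eye_pos, dtype=float)
    d = np.asarray(gaze_dir, dtype=float)
    if abs(d[2]) < 1e-9:            # ray parallel to the screen plane
        return False
    t = -eye[2] / d[2]              # ray parameter where it meets z = 0
    if t <= 0:                      # screen is behind the viewer
        return False
    hit = eye + t * d               # intersection point on the plane
    return 0.0 <= hit[0] <= screen_w and 0.0 <= hit[1] <= screen_h

# Example: eye 0.6 m in front of a 0.52 m x 0.32 m screen, looking straight at it.
print(gaze_hits_screen([0.26, 0.16, 0.6], [0.0, 0.0, -1.0], 0.52, 0.32))  # True
```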
In an embodiment of the present invention, many eye-image-based gaze detection algorithms exist. Those that rely on image processing alone, without external devices such as infrared emitters or special lenses, include the eye-contour-based gaze detection method and the iris-and-pupil-based gaze detection method.
The eye-contour-based gaze detection method first distinguishes looking up, looking level, and looking down. It then computes the ratio of the squared distance from the pupil point to the left orbital edge to the sum of the squared distances to the left and right edges, in order to distinguish looking left, center, and right. Image pre-filtering and equalization are applied to the qualifying images, the eye and pupil images are extracted after binarization, and the gaze direction and gaze vector of the eyes are judged from the result. This method is suitable for occasions with low implementation requirements but high requirements in essence.
The iris-and-pupil-based gaze detection method extracts the effective iris-edge pixels of the left and right eyes, fits an ellipse to the obtained iris edge pixels to obtain the ellipse parameters, and then computes the normal vector of the common plane and the coordinates of the iris centers, from which the actual gaze vector and focal-distance vector are calculated.
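A hedged sketch of the ellipse-fitting step alone (the edge detector, thresholds, and function name are assumptions; in practice the edge points would first be filtered down to the iris boundary rather than taken from the whole eye crop):

```python
import cv2

def fit_iris_ellipse(eye_gray):
    """Fit an ellipse to candidate iris edge pixels in a grayscale eye crop.
    Returns (centre, axes, angle) or None if there are too few edge points."""
    edges = cv2.Canny(eye_gray, 50, 150)   # candidate iris edge pixels
    pts = cv2.findNonZero(edges)
    if pts is None or len(pts) < 5:        # cv2.fitEllipse needs at least 5 points
        return None
    centre, axes, angle = cv2.fitEllipse(pts)
    return centre, axes, angle             # the ellipse centre approximates the iris centre
```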
In an embodiment of the present invention, the method for extracting the sight line information further includes: when the subject's eyes are in a closed state, judging that the pupil gaze of the subject's eyes does not fall within the range of the target the subject needs to attend to.
For example, if no pupil gaze information can be obtained from the audience member's eyes, such as when the eyes are closed, this may likewise indicate that the audience member's attention is not high.
In an embodiment of the present invention, additionally judging whether the audience member's eyes are closed increases the accuracy of the sight line information judgment.
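One common way to implement this closed-eye check (the patent does not specify one) is the eye aspect ratio over six eye landmarks; the landmark ordering and the 0.2 threshold below are typical assumed values:

```python
import numpy as np

def eye_is_closed(eye_pts, threshold=0.2):
    """eye_pts: six (x, y) landmarks ordered left corner, upper lid (x2),
    right corner, lower lid (x2), as in the usual 6-point eye model."""
    p = np.asarray(eye_pts, dtype=float)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    ear = vertical / (2.0 * horizontal)    # small EAR means the eyelids nearly touch
    return ear < threshold
```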
In an embodiment of the present invention, the method for extracting the emotion information includes: judging the corresponding emotion from the shapes and relative positions of the facial features in the subject's facial image information. The emotions include attentive emotions, namely any one or a combination of calm, happiness, and surprise; and non-attentive emotions, namely any one or a combination of disgust, anger, and fear.
In an embodiment of the invention, the shapes and relative positions of the facial features are obtained from the acquired facial image information, and a comprehensive judgment is made against a large amount of reference facial-emotion data stored in an emotion library.
For example, if the mouth in the facial image shows teeth and both corners of the mouth turn upward, it can be judged that the subject is happy at that moment.
In one embodiment of the invention, drawing on experience from actual teaching or training scenarios: during teaching or training, audience members often show a calm emotion when listening attentively, a happy emotion when listening to interesting content, or a surprised emotion when hearing content they barely understand. These are all normal emotions during attentive listening, so the present invention treats any one or a combination of calm, happiness, and surprise as attentive emotions.
Correspondingly, during teaching or training, an audience member often shows disgust when impatient with the teaching or training content, or anger or fear when criticized or affected by other matters; such states generally have a lasting negative effect and keep the audience member from listening attentively to what follows. The presence of these emotions usually prevents concentrated listening, so the present invention treats any one or a combination of disgust, anger, and fear as non-attentive emotions.
In an embodiment of the present invention, the method or algorithm for extracting emotion information from a facial image is known in the art, or is implemented by a face recognizer or recognition software.
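A sketch of this grouping (the label strings and helper name are illustrative; the classifier producing the labels is assumed to exist):

```python
# The two emotion groups named in the text.
ATTENTIVE = {"calm", "happy", "surprise"}
NON_ATTENTIVE = {"disgust", "anger", "fear"}

def is_attentive_emotion(label):
    """Map a recognized emotion label onto the attentive / non-attentive split."""
    if label in ATTENTIVE:
        return True
    if label in NON_ATTENTIVE:
        return False
    return None   # label outside the patent's two groups; handle as needed
```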
Step S103: obtaining a first concentration degree according to the sight line information and/or a second concentration degree according to the emotion information; each is classified as either a high or a low concentration degree.
In an embodiment of the invention, the method for obtaining the first concentration degree from the sight line information includes: presetting a first time threshold; if, within that threshold, the subject's eye gaze falls within the target range more often than outside it, the first concentration degree is high; otherwise, the first concentration degree is low.
In an embodiment of the invention, the gaze information extracted from the subject's facial image in step S102 yields a judgment of whether the pupil gaze falls within the target range, and the first concentration degree is determined to be high or low from these results.
In an embodiment of the invention, the preset first time threshold is a multiple of the interval at which the subject is photographed or at which images are extracted from the video. That is, the first time threshold spans at least one interval, so that at least one judgment of whether the pupil gaze falls within the target range is obtained.
For example, if the interval between captured images or extracted frames is 2 seconds, the first time threshold may be set to 20 seconds. That window then contains 10 judgments of whether the pupil gaze fell within the target range; if 7 results fall within the range and 3 do not, the first concentration degree for this window is high. If the number of in-range results is equal to or less than the number of out-of-range results, the first concentration degree for this window is low.
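A sketch of this majority rule over one first-time-threshold window (the function name is assumed; the example mirrors the 7-versus-3 case above):

```python
def first_concentration(gaze_hits):
    """gaze_hits: booleans for one window, True if the gaze fell in the target range."""
    in_range = sum(gaze_hits)
    out_of_range = len(gaze_hits) - in_range
    return "high" if in_range > out_of_range else "low"

print(first_concentration([True] * 7 + [False] * 3))   # "high", as in the example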
In an embodiment of the invention, the method for obtaining the second concentration degree from the emotion information includes: presetting a second time threshold; if, within that threshold, attentive emotions are judged more often than non-attentive emotions, the second concentration degree is high; otherwise, the second concentration degree is low.
In an embodiment of the invention, the emotion information extracted from the subject's facial image in step S102 is used to judge whether each observation reflects an attentive or a non-attentive emotion, and the second concentration degree is then determined to be high or low from these results.
In an embodiment of the invention, the preset second time threshold is likewise a multiple of the capture or extraction interval. That is, the second time threshold spans at least one interval, so that at least one attentive/non-attentive judgment is obtained.
For example, if the interval between captured images or extracted frames is 2 seconds, the second time threshold may be set to 10 seconds. That window then contains 5 attentive/non-attentive judgments; if 3 results are attentive and 2 are non-attentive, the second concentration degree for this window is high. If the number of attentive results is equal to or less than the number of non-attentive results, the second concentration degree for this window is low.
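A corresponding sketch of the second-concentration rule, with the attentive/non-attentive grouping repeated from the earlier emotion sketch (names assumed; the example mirrors the 3-versus-2 case above):

```python
ATTENTIVE = {"calm", "happy", "surprise"}       # as in the earlier emotion sketch
NON_ATTENTIVE = {"disgust", "anger", "fear"}

def second_concentration(emotion_labels):
    """emotion_labels: recognized emotion labels for one window."""
    attentive = sum(1 for e in emotion_labels if e in ATTENTIVE)
    non_attentive = sum(1 for e in emotion_labels if e in NON_ATTENTIVE)
    return "high" if attentive > non_attentive else "low"

print(second_concentration(["calm", "happy", "calm", "anger", "fear"]))  # "high"
```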
In an embodiment of the present invention, when gaze information such as an unconscious head turn or blink, or emotion information such as a sudden change of expression, fails to reflect the true concentration, analyzing and judging over the results of multiple extractions of gaze and/or emotion information from the subject's facial image better avoids the influence of such unrepresentative observations.
Step S104: and obtaining a final concentration degree according to the first concentration degree and/or the second concentration degree.
In an embodiment of the invention, the final concentration degree can be taken from the first or the second concentration degree alone; that is, the final concentration degree is the first concentration degree or the second concentration degree.
In an embodiment of the invention, the final concentration degree may also be obtained by combining the first and second concentration degrees.
In an embodiment of the invention, the method for obtaining the final concentration degree from the first and second concentration degrees includes: if both are high, the final concentration degree is high; otherwise, the final concentration degree is low.
In an embodiment of the invention, when combining the first and second concentration degrees, the first and second time thresholds must be equal: the first and second concentration degrees are obtained from the gaze information and the emotion information over the same threshold window, and the final concentration degree for that window is then derived from them.
In an embodiment of the invention, if at least one of the first concentration degree and the second concentration degree is low, the final concentration degree is low.
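The combination rule is a logical AND over the two component degrees for the same window; a minimal sketch (function name assumed):

```python
def final_concentration(first, second):
    """Final degree is high only when both component degrees are high."""
    return "high" if first == "high" and second == "high" else "low"

print(final_concentration("high", "high"))  # "high"
print(final_concentration("high", "low"))   # "low"
```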
In an embodiment of the invention, when the final concentration of an audience member is found to be low, the teacher conducting the teaching or training is alerted so as to learn in time that the member is not concentrating. The teacher can then prompt the member through the live stream, send a reminder message to the member's client, or have the client automatically pop up a reminder that the member's current concentration is low, so that the member can adjust concentration and learning efficiency in time.
Fig. 2 is a schematic block diagram of the object concentration analysis apparatus of the present invention. As shown in the figure, the apparatus comprises: an acquisition module 201 for acquiring facial image information of the subject; and a processing module 202 for extracting sight line information and/or emotion information from the subject's facial image, obtaining a first concentration degree according to the sight line information and/or a second concentration degree according to the emotion information, and obtaining a final concentration degree according to the first concentration degree and/or the second concentration degree.
In an embodiment of the present invention, the facial image information acquired by the acquisition module 201 may be obtained by recording a video of the subject, by extracting frames from the recorded video, or by photographing the subject to obtain an image set.
For example, the taught subject learns through any one of a desktop computer, a notebook computer, a tablet computer, and a mobile terminal equipped with a camera.
In an embodiment of the invention, the processing module 202 is configured to execute the method for analyzing concentration of the subject as shown in steps S102-S104 of fig. 1.
It should be noted that the division of the modules of the above apparatus is only a logical division; in actual implementation the modules may be wholly or partially integrated into one physical entity, or physically separated. These modules may all be implemented as software invoked by a processing element, entirely as hardware, or partly as software invoked by a processing element and partly as hardware. For example, the processing module 202 may be a separately established processing element, or may be integrated into a chip of the apparatus, or may be stored in the memory of the apparatus as program code that a processing element of the apparatus calls to execute the module's function. The other modules are implemented similarly. In addition, all or some of these modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each module above, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
Fig. 3 is a schematic structural diagram of an electronic terminal according to an embodiment of the invention. As shown in the figure, the terminal comprises: a processor 301, a memory 302, and a communicator 303. The memory 302 stores a computer program; the processor 301 executes the computer program stored in the memory 302 so that the electronic terminal performs the object concentration analysis method of the embodiment of fig. 1; and the communicator 303 communicatively connects with external devices.
In an embodiment of the invention, the external device communicatively connected to the communicator 303 may be a device providing image information of a face of a subject, such as a camera.
The processor 301 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 302 may include a Random Access Memory (RAM), and may further include a non-volatile memory (non-volatile memory), such as at least one disk memory.
The communicator 303 is used to implement a communication link between the electronic terminal and other devices (e.g., a client, a read-write library, and a read-only library), which may be any suitable combination of one or more wired and/or wireless networks. For example, the communication method is a network communication method, including: any one or more of the internet, an intranet, a wide area network (WAN), a local area network (LAN), a wireless network, a digital subscriber line (DSL) network, a frame relay network, an asynchronous transfer mode (ATM) network, a virtual private network (VPN), and/or any other suitable communication network.
To achieve the above and other related objects, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the subject concentration analysis method.
It should be noted that, as can be understood by those skilled in the art: all or part of the steps for implementing the above method embodiments may be performed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
In summary, with the object concentration analysis method and device, electronic terminal, and storage medium of the present invention, the subject's facial information is acquired, emotion information and/or sight line information is extracted from the facial image, and the extracted information is analyzed to obtain the subject's concentration. The invention thus effectively overcomes various shortcomings of the prior art and has high industrial utilization value.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the invention shall still be covered by the claims of the invention.

Claims (10)

1. A method for concentration analysis of an object, the method comprising:
acquiring object face image information;
extracting line-of-sight information and/or emotion information from the subject face image;
obtaining a first concentration degree according to the sight line information and/or a second concentration degree according to the emotion information; the first concentration degree and/or second concentration degree is classified as either a high concentration degree or a low concentration degree;
and obtaining a final concentration degree according to the first concentration degree and/or the second concentration degree.
2. The method of object concentration analysis of claim 1, wherein the method of deriving a final concentration from the first concentration and the second concentration comprises:
if the first concentration degree and the second concentration degree are both high concentration degrees, the final concentration degree is the high concentration degree; if not, the final concentration degree is a low concentration degree.
3. The method of object concentration analysis of claim 1, wherein the method of extracting gaze information comprises:
judging whether the subject's eye gaze falls within the range of the target the subject needs to attend to, according to pre-obtained information on the placement of the device acquiring the subject's facial image and on the position and size of that target.
4. The method of object concentration analysis of claim 3, wherein the method of extracting gaze information further comprises: when the subject's eyes are in a closed state, judging that the gaze of the subject's eyes does not fall within the range of the target the subject needs to attend to.
5. The method of object concentration analysis of claim 3, wherein the method of deriving a first concentration from the gaze information comprises:
presetting a first time threshold, wherein if, within that threshold, the gaze of the subject's eye pupils falls within the range of the target the subject needs to attend to more often than outside it, the first concentration degree is a high concentration degree; if not, the first concentration degree is a low concentration degree.
6. The method of concentration analysis of a subject of claim 1, wherein the method of extracting emotional information comprises:
judging the corresponding emotion according to the shapes and relative positions of the facial features in the subject's facial image information; the emotions include: attentive emotions, including any one or a combination of calm, happiness, and surprise; and non-attentive emotions, including any one or a combination of disgust, anger, and fear.
7. The method of subject concentration analysis of claim 6, wherein the method of deriving a second concentration from the mood information comprises:
presetting a second time threshold, wherein if, within that threshold, attentive emotions are judged more often than non-attentive emotions, the second concentration degree is a high concentration degree; if not, the second concentration degree is a low concentration degree.
8. An object concentration degree analysis apparatus, comprising:
the acquisition module is used for acquiring the image information of the face of the object;
a processing module for extracting line of sight information and/or emotion information from the subject face image; obtaining a first concentration degree according to the sight line information; and/or obtaining a second concentration degree according to the emotion information; and obtaining a final concentration degree according to the first concentration degree and/or the second concentration degree.
9. An electronic terminal, comprising: a processor, a memory, and a communicator;
the memory is used for storing a computer program, the processor is used for implementing the object concentration analysis method of any one of claims 1 to 7 when executing the computer program stored in the memory, and the communicator is used for being in communication connection with an external device.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method for subject concentration analysis of any one of claims 1 to 7.
CN201811166639.1A 2018-10-08 2018-10-08 Object concentration analysis method and device, electronic terminal and storage medium Pending CN111008542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811166639.1A CN111008542A (en) 2018-10-08 2018-10-08 Object concentration analysis method and device, electronic terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811166639.1A CN111008542A (en) 2018-10-08 2018-10-08 Object concentration analysis method and device, electronic terminal and storage medium

Publications (1)

Publication Number Publication Date
CN111008542A 2020-04-14

Family

ID=70111525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811166639.1A Pending CN111008542A (en) 2018-10-08 2018-10-08 Object concentration analysis method and device, electronic terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111008542A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652045A (en) * 2020-04-17 2020-09-11 西北工业大学太仓长三角研究院 Classroom teaching quality assessment method and system
CN111652045B (en) * 2020-04-17 2022-10-28 西北工业大学太仓长三角研究院 Classroom teaching quality assessment method and system
CN113591515A (en) * 2020-04-30 2021-11-02 百度在线网络技术(北京)有限公司 Concentration processing method, device and storage medium
CN113591515B (en) * 2020-04-30 2024-04-05 百度在线网络技术(北京)有限公司 Concentration degree processing method, device and storage medium
CN111914694A (en) * 2020-07-16 2020-11-10 哈尔滨工程大学 Class quality detection method based on face recognition
CN112329643A (en) * 2020-11-06 2021-02-05 重庆第二师范学院 Learning efficiency detection method, system, electronic device and medium
CN112597935A (en) * 2020-12-29 2021-04-02 北京影谱科技股份有限公司 Attention level detection method and device, computing equipment and storage medium
CN113326729A (en) * 2021-04-16 2021-08-31 合肥工业大学 Multi-mode classroom concentration detection method and device
CN113326729B (en) * 2021-04-16 2022-09-09 合肥工业大学 Multi-mode classroom concentration detection method and device
CN113641246A (en) * 2021-08-25 2021-11-12 兰州乐智教育科技有限责任公司 Method and device for determining user concentration degree, VR equipment and storage medium
CN113709568A (en) * 2021-08-31 2021-11-26 维沃移动通信有限公司 Concentration degree reminding method and device

Similar Documents

Publication Publication Date Title
CN111008542A (en) Object concentration analysis method and device, electronic terminal and storage medium
JP6849824B2 (en) Systems and methods for guiding users to take selfies
US10891873B2 (en) Method and apparatus for monitoring learning and electronic device
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
EP4198814A1 (en) Gaze correction method and apparatus for image, electronic device, computer-readable storage medium, and computer program product
WO2021077382A1 (en) Method and apparatus for determining learning state, and intelligent robot
TWI684159B (en) Instant monitoring method for interactive online teaching
WO2022156622A1 (en) Sight correction method and apparatus for face image, device, computer-readable storage medium, and computer program product
CN111144356B (en) Teacher sight following method and device for remote teaching
KR102336574B1 (en) Learning Instruction Method Using Video Images of Non-face-to-face Learners, and Management Server Used Therein
KR20200012355A (en) Online lecture monitoring method using constrained local model and Gabor wavelets-based face verification process
US11295117B2 (en) Facial modelling and matching systems and methods
CN111008971A (en) Aesthetic quality evaluation method of group photo image and real-time shooting guidance system
CN108133189B (en) Hospital waiting information display method
CN114926889B (en) Job submission method and device, electronic equipment and storage medium
Chukoskie et al. Quantifying gaze behavior during real-world interactions using automated object, face, and fixation detection
CN112883851A (en) Learning state detection method and device, electronic equipment and storage medium
Healy et al. Detecting demeanor for healthcare with machine learning
CN113536893A (en) Online teaching learning concentration degree identification method, device, system and medium
CN113591562A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN116994465A (en) Intelligent teaching method, system and storage medium based on Internet
CN110543813B (en) Face image and gaze counting method and system based on scene
CN116434253A (en) Image processing method, device, equipment, storage medium and product
TWI709117B (en) Cloud intelligent object image recognition system
KR101570870B1 (en) System for estimating difficulty level of video using video watching history based on eyeball recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination