CN111144321A - Concentration detection method, apparatus, device, and storage medium


Info

Publication number
CN111144321A
CN111144321A (application CN201911383377.9A; granted as CN111144321B)
Authority
CN
China
Prior art keywords
concentration
detected
video frame
result
frame sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911383377.9A
Other languages
Chinese (zh)
Other versions
CN111144321B (en)
Inventor
汤炜
Current Assignee
Beijing Rubu Technology Co ltd
Original Assignee
Beijing Roobo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Roobo Technology Co ltd
Priority to CN201911383377.9A
Publication of CN111144321A
Application granted
Publication of CN111144321B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/48: Matching video sequences
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention discloses a concentration detection method, apparatus, device, and storage medium. The concentration detection method comprises the following steps: acquiring a video frame sequence of an object to be detected at regular intervals; determining the action type and concentration result of the object to be detected from the video frame sequence based on a concentration detection model; and generating response reminder information according to the action type and concentration result of the object to be detected, and feeding the response reminder information back to the object to be detected; wherein the concentration detection model is generated from sample video frame sequences, sample action types, and sample concentration results. By performing concentration detection on video frame sequences acquired at regular intervals, the method and apparatus can remind the object to be detected of its concentration in real time and improve the user experience. Because the concentration result is determined from the context information between consecutive frames in the video frame sequence, the accuracy of concentration detection is also improved.

Description

Concentration detection method, apparatus, device, and storage medium
Technical Field
Embodiments of the invention relate to the field of education technology, and in particular to a concentration detection method, apparatus, device, and storage medium.
Background
Concentration is an effective indicator of how efficiently a person works or studies, so concentration detection is important in fields such as teaching: for example, judging a student's concentration during online course study, or having a home education robot judge a student's concentration while it assists with teaching. Applying concentration detection in these scenarios helps in two ways: on the one hand, the user can be reminded in real time when a lapse of concentration is detected; on the other hand, the product side can statistically analyze the data collected in the relevant scenarios and make follow-up improvements that enhance the user experience.
Currently, common methods for detecting concentration include: online monitoring by a human expert; monitoring data such as the eye gaze and brain waves of the detected object through a wearable device and computing concentration from that data; and recognizing single frames of a captured video to determine the user's concentration.
However, expert online monitoring is not automated, is easily influenced by subjective factors, and yields concentration results of low accuracy. A wearable device requires the detected object to wear extra equipment, which affects the object's freedom and comfort and increases the detection cost. Single-frame recognition ignores the context information between consecutive frames, so the recognition results of adjacent frames are prone to inconsistency or unstable fluctuation, degrading the accuracy of the final concentration result.
Disclosure of Invention
Embodiments of the invention provide a concentration detection method, apparatus, device, and storage medium.
In a first aspect, an embodiment of the present invention provides a concentration detection method, including:
acquiring a video frame sequence of an object to be detected at regular intervals;
determining the action type and concentration result of the object to be detected from the video frame sequence based on a concentration detection model;
generating response reminder information according to the action type and concentration result of the object to be detected, and feeding the response reminder information back to the object to be detected;
wherein the concentration detection model is generated from sample video frame sequences, sample action types, and sample concentration results.
In a second aspect, an embodiment of the present invention further provides a concentration detection apparatus, including:
a video frame sequence acquisition module, configured to acquire a video frame sequence of an object to be detected at regular intervals;
a concentration result determination module, configured to determine, based on a concentration detection model, the action type and concentration result of the object to be detected from the video frame sequence;
a response reminder information generation module, configured to generate response reminder information according to the action type and concentration result of the object to be detected, and to feed the response reminder information back to the object to be detected;
wherein the concentration detection model is generated from sample video frame sequences, sample action types, and sample concentration results.
In a third aspect, an embodiment of the present invention further provides a computer device, including:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the concentration detection method according to any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the concentration detection method according to any embodiment of the present invention.
By performing concentration detection on video frame sequences acquired at regular intervals, the embodiments of the invention can remind the object to be detected of its concentration in real time and improve the user experience. Because concentration detection is performed on the video frame sequence by the concentration detection model, and the detection result is determined from the context information between consecutive frames in the sequence, the accuracy of concentration detection is improved.
Drawings
FIG. 1 is a flowchart of a concentration detection method according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a concentration detection method according to a second embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a concentration detection apparatus according to a third embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a flowchart of a concentration detection method according to a first embodiment of the present invention. The method is applicable to detecting a user's concentration in real time during learning. It may be performed by a concentration detection apparatus, which may be implemented in software and/or hardware and configured in a computer device, for example a backend server or another device with communication and computing capabilities. As shown in Fig. 1, the method specifically includes:
step 101, obtaining a video frame sequence of an object to be detected at regular time.
The object to be detected is a user whose learning state needs to be monitored. For example, a student taking an online course whose learning concentration needs to be detected is an object to be detected; likewise, when a home education robot teaches a user, detecting the user's real-time learning state helps improve teaching quality, and the user receiving the instruction is the object to be detected. The video frame sequence is a video that captures the current learning state of the object to be detected, for example the sequence obtained from a camera placed in front of the user during online course study.
Specifically, a camera is placed in front of, or diagonally in front of, the object to be detected to capture video containing the object in the picture. The video stream from the camera is split into equal-length segments according to the interval set by a timer, and each completed segment is taken as a video frame sequence. For example, for online course learning with the timer interval set to 10 s, recording of the picture containing the student starts when the course begins, and the video is split at 10 s intervals into segments, each of which is a video frame sequence: the segment from 0 s to 10 s is the first video frame sequence, and each time the timer fires, the segment covering the previous 10 s up to the current moment becomes the video frame sequence used to evaluate the object to be detected.
By acquiring the video frame sequence at regular intervals and then detecting the concentration of the object to be detected in that sequence, a timely response to the object's concentration in the most recently acquired sequence is achieved; this realizes real-time concentration detection and improves the user experience.
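The timed segmentation of step 101 can be sketched in plain Python. The frame rate and window length below are illustrative assumptions, not values fixed by the patent text:

```python
# Illustrative sketch: splitting a frame stream into fixed-length video
# frame sequences, assuming 25 fps and the 10 s timer from the example.
FPS = 25
WINDOW_SECONDS = 10
FRAMES_PER_SEQUENCE = FPS * WINDOW_SECONDS  # 250 frames per sequence

def split_into_sequences(frames):
    """Group frames into consecutive fixed-length sequences.

    Frames left over after the last full window are held back until the
    timer fires again, mirroring the timed segmentation in step 101.
    """
    sequences = []
    for start in range(0, len(frames) - FRAMES_PER_SEQUENCE + 1, FRAMES_PER_SEQUENCE):
        sequences.append(frames[start:start + FRAMES_PER_SEQUENCE])
    return sequences

# 30 s of dummy frames yield three 10 s sequences.
frames = list(range(3 * FRAMES_PER_SEQUENCE))
print(len(split_into_sequences(frames)))  # 3
```

In a real deployment the frames would come from a camera stream rather than a list, but the windowing logic is the same.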
Step 102: based on a concentration detection model, determine the action type and concentration result of the object to be detected from the video frame sequence.
The concentration detection model is generated from sample video frame sequences, sample action types, and sample concentration results. A sample video frame sequence is a sequence of image frames of a detected object collected in advance, and its length is generally the same as that of the video frame sequences of the object to be detected, e.g. one sequence represents 10 s of learning video. A sample action type is a category assigned to the behavior of the detected object in the sequence; examples include: looking straight ahead, fidgeting, distracted, writing, speaking, and uncertain. A sample concentration result describes the degree of learning engagement shown by the detected object in the sequence; for example, it can be divided into concentrating and not concentrating, or more finely into uncertain, not concentrating, generally concentrating, fully concentrating, and so on. The concentration detection model is obtained by training, with a deep learning algorithm, on video frame sequences labeled with sample action types and sample concentration results.
Optionally, the concentration detection model is constructed based on a multi-task concentration detection model;
the multi-task concentration detection model is generated from sample video frame sequences, sample action types, sample concentration results, and sample object position information.
Here multi-task means that the concentration detection model trains several tasks simultaneously, for example object position prediction, action type classification, and concentration classification. Object position information is the position coordinates of the detected object in each frame of the video frame sequence; introducing it supplies extra information for action type and concentration recognition and thereby improves their accuracy. Illustratively, under the multi-task training mechanism, a convolutional layer and a softmax layer are introduced to classify the user's concentration result and action type respectively, and a bounding-box regression layer is introduced to predict the user's position information; the network can then learn feature expressions enhanced by the position of the user performing the related action, which promotes the accuracy of action type and concentration recognition. Optionally, the concentration detection model is constructed on a convolutional neural network structure. Illustratively, to ensure that the model learns both the temporal and the spatial information in the video frame sequence, a multi-task deep spatio-temporal 3D convolutional neural network structure can be used to learn from the sample video frame sequences, sample action types, sample concentration results, and sample object position information, training the concentration detection model.
The trained concentration detection model is then used to predict on the acquired video frame sequence of the object to be detected and to output its action type and concentration result. Illustratively, in the prediction stage the position-information output is disabled: since position information serves only to enhance feature expression in the network during training, and thus the accuracy of the concentration result, removing the position branch at inference reduces computation and improves detection efficiency. Continuing the example above, when the online course video reaches the 10 s timer interval, the video of the 10 s preceding the current moment is fed to the concentration detection model as the input video frame sequence, and the model outputs the action type and concentration result of the learner in the video.
By introducing a multi-task learning strategy into the training of the spatio-temporal convolutional network, object position detection is performed alongside action and concentration recognition in the video, which pushes the convolutional network to learn richer spatial and temporal features and improves the accuracy of the action type and concentration results.
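A minimal, hypothetical sketch of the prediction stage described above: the multi-task model's position branch is skipped at inference, and only the action type and concentration heads are read. The stub below stands in for a trained 3D convolutional network; all names and scores are invented for illustration:

```python
# Hypothetical inference sketch: the trained multi-task model has three
# heads (action, concentration, position), but the position branch is
# skipped at prediction time to save computation.
ACTION_TYPES = ["looking", "fidgeting", "distracted", "writing", "speaking", "uncertain"]
CONCENTRATION = ["not_concentrating", "concentrating"]

def stub_model(sequence, with_position=True):
    """Pretend forward pass returning (action scores, concentration scores, boxes)."""
    action_scores = [0.1, 0.1, 0.1, 0.5, 0.1, 0.1]   # fixed dummy scores
    concentration_scores = [0.2, 0.8]
    boxes = [(10, 20, 100, 200)] * len(sequence) if with_position else None
    return action_scores, concentration_scores, boxes

def predict(sequence):
    # Position branch disabled at inference, as described in the text.
    action_scores, conc_scores, _ = stub_model(sequence, with_position=False)
    action = ACTION_TYPES[action_scores.index(max(action_scores))]
    concentration = CONCENTRATION[conc_scores.index(max(conc_scores))]
    return action, concentration

print(predict([None] * 250))  # ('writing', 'concentrating')
```

A real implementation would replace `stub_model` with a 3D-CNN forward pass; the head-selection logic around it would be unchanged.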
Step 103: generate response reminder information according to the action type and concentration result of the object to be detected, and feed the response reminder information back to the object to be detected.
The response reminder information feeds the detection result back to the object to be detected, reminding it of its current concentration state.
Specifically, the corresponding response reminder information is fed back to the object to be detected according to a preset mapping between concentration results and response mechanisms. For example, depending on the concentration result and the recognized action type, a voice prompt can remind the object that it has worked for a long time and may rest, or that its attention is too low and is affecting the current learning; an image can also be shown to divert attention and help the object relax.
Optionally, generating the response reminder information according to the action type and concentration result of the object to be detected includes:
if the concentration result of the object to be detected is not concentrating, generating response reminder information that includes the action type of the object to be detected.
Specifically, if the concentration result output by the concentration detection model is not concentrating, the learning state of the object to be detected is poor and learning efficiency may suffer; to remind the object, the action type output by the model is included in the response reminder information. Continuing the example above, when the online course reaches minute 30:10, the timer fires and the video frame sequence from 30:01 to 30:10 is fed to the concentration detection model; if the output action type is speaking and the concentration result is not concentrating, a reminder such as "the student is currently detected to be speaking" is issued through a voice device, or displayed on the course interface, to prompt the student to refocus.
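As a sketch, the rule above amounts to embedding the recognized action type in the reminder whenever the result is not concentrating. The message wording is hypothetical:

```python
# Illustrative reminder rule: a "not concentrating" result produces a
# reminder naming the recognized action; a "concentrating" result does not.
def build_reminder(action, concentration):
    if concentration == "not_concentrating":
        return f"The student is currently detected to be {action}."
    return None  # no reminder needed while the student is concentrating

print(build_reminder("speaking", "not_concentrating"))
```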
Optionally, generating the response reminder information according to the action type and concentration result of the object to be detected includes:
if N consecutive concentration results of the object to be detected are all concentrating, and the continuous concentration duration exceeds a threshold, generating a timeout reminder.
Specifically, if the concentration results output by the concentration detection model are concentrating, the learning state of the object to be detected is good; to protect learning efficiency, the object is reminded to rest once it has stayed in the concentrating state for too long. N is set according to the length of the video frame sequence and a preset concentration-duration threshold: for example, with a threshold of 30 minutes and 10 s sequences from the timer, N can be set to 180. Continuing the example above, if the model has output 180 consecutive concentrating results and the interval from the start time of the first of those sequences to the current moment exceeds 30 minutes, timeout reminder information is generated, for example reminding the student that "your study time is too long; consider pausing to rest"; when the student chooses to pause, an image for protecting eyesight can be shown on the learning interface.
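The computation of N and the timeout check can be sketched as follows. The constants follow the example in the text (10 s sequences, 30-minute threshold):

```python
# Timeout rule sketch: with 10 s sequences and a 30-minute threshold,
# N = 30 * 60 / 10 = 180 consecutive "concentrating" results trigger a reminder.
SEQUENCE_SECONDS = 10
THRESHOLD_SECONDS = 30 * 60
N = THRESHOLD_SECONDS // SEQUENCE_SECONDS  # 180

def timeout_reminder(recent_results):
    """Return a reminder once the last N results are all 'concentrating'."""
    if len(recent_results) >= N and all(
        r == "concentrating" for r in recent_results[-N:]
    ):
        return "Your study time is too long; consider pausing to rest."
    return None

print(timeout_reminder(["concentrating"] * 180))
```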
Generating response reminder information from the concentration result determined by the concentration detection model improves both the user's experience and the user's learning efficiency.
The embodiments of the invention can remind the object to be detected of its concentration in real time, improving the user's experience and learning efficiency, and allow the relevant course content to be improved based on statistics of students' concentration on that content. Concentration detection, action type detection, and position detection are performed on the video frame sequence by a multi-task concentration detection model; the detection result is determined from the context information between consecutive frames in the sequence, and the multi-task learning mechanism enhances the expression of feature information, improving the accuracy of concentration detection.
Example two
Fig. 2 is a flowchart of a concentration detection method in the second embodiment of the present invention. The second embodiment further optimizes the first: by analyzing the concentration results of all video frame sequences across an entire learning session, it derives a learning score, detecting the overall concentration of the object to be detected and improving the completeness of concentration detection. As shown in Fig. 2, the method includes:
step 201, a video frame sequence of an object to be detected is obtained at regular time.
Step 202, based on the concentration detection model, determining the action type and the concentration result of the object to be detected according to the video frame sequence.
And 203, generating response reminding information according to the action type and the concentration result of the object to be detected so as to feed back the response reminding information to the object to be detected.
Step 204: apply smoothing filtering to the concentration results of all video frame sequences of the current learning session to obtain processed concentration results.
Here the current learning session is the overall learning process whose concentration is to be evaluated as a whole; for online course learning, it is one course from beginning to end.
Specifically, after one online course session ends, the concentration results of all the acquired video frame sequences are collected and the collected results are smoothed, which improves the robustness of the concentration results over the whole session. Illustratively, the concentration results of the video frame sequences are arranged in time order into a one-dimensional array, the array is processed with a Gaussian function, and the processed array is the processed concentration result; each position in the array still corresponds to the concentration result of the corresponding video frame sequence.
Smoothing reduces misrecognitions: for example, an isolated not-concentrating result inside a run of concentrating results is corrected to concentrating, lowering the chance of misjudgment. In addition, because the smoothed concentration results are not swayed by individual outliers, they are more reliable as a reference.
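One way to realize the Gaussian smoothing described above, sketched here over binary concentration results. The kernel size, sigma, and 0.5 threshold are illustrative assumptions; the patent text does not fix them:

```python
# Hedged smoothing sketch: binary concentration results (1 = concentrating,
# 0 = not) are convolved with a small normalized Gaussian kernel and
# re-thresholded, so isolated misclassifications are corrected.
import math

def gaussian_kernel(radius=2, sigma=1.0):
    weights = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def smooth_results(results, radius=2, sigma=1.0):
    kernel = gaussian_kernel(radius, sigma)
    smoothed = []
    for i in range(len(results)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - radius, 0), len(results) - 1)  # clamp at edges
            acc += w * results[j]
        smoothed.append(1 if acc >= 0.5 else 0)
    return smoothed

# A single 0 inside a run of 1s is smoothed back to 1.
print(smooth_results([1, 1, 1, 0, 1, 1, 1]))  # [1, 1, 1, 1, 1, 1, 1]
```

With finer concentration categories, the same idea applies to per-category scores instead of a single binary sequence.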
Step 205: determine the learning score of the object to be detected according to the ratio of the concentration duration to the total learning duration in the processed concentration results.
Specifically, the proportion of the video duration corresponding to concentrating results within the total duration of the session is computed from the smoothed concentration results, and the object's score for this session is determined from a preset mapping between that proportion and learning scores.
For example, continuing the example above, each timer-produced video frame sequence has a fixed duration of 10 s. Counting the number of concentrating results in the processed results gives the concentration duration as that count times the unit duration: after a course with a total duration of 30 minutes, if 120 sequences were judged concentrating, the concentration duration is 120 x 10 s = 1200 s = 20 minutes, i.e. 2/3 of the total learning duration. The preset mapping might award a learning score of 8 when the ratio of concentration duration to total duration is at least 0.6, and 6 otherwise; the exact mapping can be subdivided according to the actual situation and is not limited here. When the concentration results are classified more finely, the proportion of one or several concentration categories in the total duration can be counted instead, again configured according to the actual situation without limitation here.
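The ratio-to-score mapping from the example can be sketched as follows. The 0.6 cutoff and the 8/6 scores follow the example above; a real product might subdivide the mapping further:

```python
# Score computation sketch for step 205: concentration duration is the
# count of concentrating sequences times the 10 s unit duration.
SEQUENCE_SECONDS = 10

def learning_score(processed_results, total_seconds):
    concentrating = sum(1 for r in processed_results if r == 1)
    ratio = concentrating * SEQUENCE_SECONDS / total_seconds
    return 8 if ratio >= 0.6 else 6

# 120 concentrating sequences in a 30-minute (1800 s) course: ratio 2/3.
results = [1] * 120 + [0] * 60
print(learning_score(results, 30 * 60))  # 8
```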
Statistics over the concentration duration yield a concentration result on a larger time scale, improve the robustness of the learning score, and let students and product teams understand the learner's state over the whole course.
The second embodiment obtains a learning score by analyzing the concentration results of all video frame sequences across the whole learning session, detecting the overall concentration of the object to be detected and improving the completeness of concentration detection. Because the score is determined from the smoothed concentration results of all sequences, its accuracy and robustness are improved, and the influence of briefly misrecognized sequences is avoided.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a concentration detection apparatus according to a third embodiment of the present invention, applicable to detecting a user's concentration in real time during learning. As shown in Fig. 3, the apparatus includes:
a video frame sequence acquisition module 310, configured to acquire a video frame sequence of an object to be detected at regular intervals;
a concentration result determination module 320, configured to determine, based on a concentration detection model, the action type and concentration result of the object to be detected from the video frame sequence;
a response reminder information generation module 330, configured to generate response reminder information according to the action type and concentration result of the object to be detected, and to feed the response reminder information back to the object to be detected;
wherein the concentration detection model is generated from sample video frame sequences, sample action types, and sample concentration results.
By performing concentration detection on video frame sequences acquired at regular intervals, the apparatus can remind the object to be detected of its concentration in real time and improve the user experience. Because concentration detection is performed on the video frame sequence by the concentration detection model, and the detection result is determined from the context information between consecutive frames in the sequence, the accuracy of concentration detection is improved.
Optionally, the concentration detection model is constructed based on a multi-task concentration detection model;
the multi-task concentration detection model is generated from sample video frame sequences, sample action types, sample concentration results, and sample object position information.
Optionally, the concentration detection model is constructed based on a convolutional neural network structure.
Optionally, the response reminding information generating module 330 is specifically configured to:
if the concentration result of the object to be detected is non-concentration, generate response reminding information comprising the action type of the object to be detected.
Optionally, the response reminding information generating module 330 is specifically configured to:
if N consecutive concentration results of the object to be detected are all concentration, and the continuous concentration duration exceeds a threshold, generate an overtime prompt.
Optionally, the apparatus further includes a learning-score determining module, specifically configured to:
perform smooth filtering on the concentration results of the video frame sequences of the current learning session to obtain processed concentration results;
determine the learning score of the object to be detected according to the ratio of the concentration duration to the total learning duration in the processed concentration results.
The concentration detection apparatus provided by this embodiment of the invention can execute the concentration detection method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to that method.
Example four
Fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary computer device 12 suitable for implementing embodiments of the present invention. The computer device 12 shown in FIG. 4 is only one example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 4, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory device 28, and a bus 18 that couples various system components including the system memory device 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory device bus or memory device controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system storage 28 may include computer system readable media in the form of volatile storage, such as Random Access Memory (RAM) 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Storage 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in storage 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, computer device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system storage device 28, for example, to implement the concentration detection method provided by the embodiment of the present invention, including:
acquiring a video frame sequence of an object to be detected at regular intervals;
determining the action type and concentration result of the object to be detected according to the video frame sequence based on a concentration detection model;
generating response reminding information according to the action type and the concentration result of the object to be detected so as to feed back the response reminding information to the object to be detected;
wherein the concentration detection model is generated from a sample video frame sequence, a sample action type, and a sample concentration result.
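Tying the three claimed steps together, the following driver loop shows one way the pipeline could be wired up. `capture_sequence`, `detect`, and `remind` are caller-supplied stand-ins for the camera, the concentration detection model, and the feedback channel; none of them is defined concretely by the patent, so this is a sketch under those assumptions.

```python
import time


def detection_loop(capture_sequence, detect, remind,
                   interval=2.0, iterations=None):
    """Minimal driver for the claimed pipeline: periodically grab a
    video frame sequence, run the (assumed) concentration detection
    model, and feed a reminder back to the object to be detected.

    `detect(frames)` is assumed to return (action_type, concentrated).
    `iterations` bounds the loop for testing; None means run forever."""
    count = 0
    results = []
    while iterations is None or count < iterations:
        frames = capture_sequence()            # timed acquisition
        action, concentrated = detect(frames)  # model inference
        if not concentrated:
            remind(f"Detected non-study action: {action}")
        results.append(concentrated)
        count += 1
        if iterations is None:
            time.sleep(interval)               # fixed-time acquisition gap
    return results
```

In a real deployment the reminder would reach the user through speech or an on-screen prompt; here it is just a callback so the control flow stays visible.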
Example five
The fifth embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the concentration detection method provided by the embodiments of the present invention, the method including:
acquiring a video frame sequence of an object to be detected at regular intervals;
determining the action type and concentration result of the object to be detected according to the video frame sequence based on a concentration detection model;
generating response reminding information according to the action type and the concentration result of the object to be detected so as to feed back the response reminding information to the object to be detected;
wherein the concentration detection model is generated from a sample video frame sequence, a sample action type, and a sample concentration result.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A concentration detection method, comprising:
acquiring a video frame sequence of an object to be detected at regular intervals;
determining the action type and concentration result of the object to be detected according to the video frame sequence based on a concentration detection model;
generating response reminding information according to the action type and the concentration result of the object to be detected so as to feed back the response reminding information to the object to be detected;
wherein the concentration detection model is generated from a sample video frame sequence, a sample action type, and a sample concentration result.
2. The method of claim 1, wherein the concentration detection model is constructed based on a multitask concentration detection model;
the multi-task concentration detection model is generated by a sample video frame sequence, a sample action type, a sample concentration result and sample object position information.
3. The method of claim 1, wherein the concentration detection model is constructed based on a convolutional neural network structure.
4. The method according to claim 1, wherein generating response reminding information according to the action type and concentration result of the object to be detected comprises:
if the concentration result of the object to be detected is not concentrated, generating response reminding information comprising the action type of the object to be detected.
5. The method according to claim 1, wherein generating response reminding information according to the action type and concentration result of the object to be detected comprises:
if N consecutive concentration results of the object to be detected are all concentrated, and the continuous concentration duration exceeds a threshold, generating an overtime prompt.
6. The method of claim 1, further comprising, after determining the action type and concentration result of the object to be detected according to the sequence of video frames:
performing smooth filtering on the concentration result of each video frame sequence of the current learning session to obtain a processed concentration result;
and determining the learning score of the object to be detected according to the ratio of the concentration duration to the total learning duration in the processed concentration result.
7. A concentration degree detection device, comprising:
the video frame sequence acquisition module is used for acquiring a video frame sequence of an object to be detected at regular intervals;
the concentration result determining module is used for determining the action type and the concentration result of the object to be detected according to the video frame sequence based on a concentration detection model;
the response reminding information generating module is used for generating response reminding information according to the action type and the concentration result of the object to be detected so as to feed back the response reminding information to the object to be detected;
wherein the concentration detection model is generated from a sample video frame sequence, a sample action type, and a sample concentration result.
8. The apparatus of claim 7, wherein the concentration detection model is constructed based on a multitask concentration detection model;
the multi-task concentration detection model is generated by a sample video frame sequence, a sample action type, a sample concentration result and sample object position information.
9. A computer device, comprising:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the concentration detection method as recited in any one of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the concentration detection method as claimed in any one of claims 1-6.
CN201911383377.9A 2019-12-28 2019-12-28 Concentration detection method, device, equipment and storage medium Active CN111144321B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911383377.9A CN111144321B (en) 2019-12-28 2019-12-28 Concentration detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911383377.9A CN111144321B (en) 2019-12-28 2019-12-28 Concentration detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111144321A true CN111144321A (en) 2020-05-12
CN111144321B CN111144321B (en) 2023-06-09

Family

ID=70521449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911383377.9A Active CN111144321B (en) 2019-12-28 2019-12-28 Concentration detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111144321B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709362A (en) * 2020-06-16 2020-09-25 百度在线网络技术(北京)有限公司 Method, device, equipment and storage medium for determining key learning content
CN112307920A (en) * 2020-10-22 2021-02-02 东云睿连(武汉)计算技术有限公司 High-risk work-type operator behavior early warning device and method
CN112801052A (en) * 2021-04-01 2021-05-14 北京百家视联科技有限公司 User concentration degree detection method and user concentration degree detection system
WO2022095386A1 (en) * 2020-11-05 2022-05-12 平安科技(深圳)有限公司 Online training evaluation method and apparatus, computer device and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017067399A1 (en) * 2015-10-20 2017-04-27 阿里巴巴集团控股有限公司 Method and device for early warning based on image identification
CN107194371A (en) * 2017-06-14 2017-09-22 易视腾科技股份有限公司 The recognition methods of user's focus and system based on stratification convolutional neural networks
US20180242898A1 (en) * 2015-08-17 2018-08-30 Panasonic Intellectual Property Management Co., Ltd. Viewing state detection device, viewing state detection system and viewing state detection method
CN109409241A (en) * 2018-09-28 2019-03-01 百度在线网络技术(北京)有限公司 Video checking method, device, equipment and readable storage medium storing program for executing
CN109522815A (en) * 2018-10-26 2019-03-26 深圳博为教育科技有限公司 A kind of focus appraisal procedure, device and electronic equipment
CN109740446A (en) * 2018-12-14 2019-05-10 深圳壹账通智能科技有限公司 Classroom students ' behavior analysis method and device
US20190163981A1 (en) * 2017-11-28 2019-05-30 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for extracting video preview, device and computer storage medium
CN109961047A (en) * 2019-03-26 2019-07-02 北京儒博科技有限公司 Study measure of supervision, device, robot and the storage medium of educational robot
CN110119757A (en) * 2019-03-28 2019-08-13 北京奇艺世纪科技有限公司 Model training method, video category detection method, device, electronic equipment and computer-readable medium
CN110175501A (en) * 2019-03-28 2019-08-27 重庆电政信息科技有限公司 More people's scene focus recognition methods based on recognition of face
US20190278378A1 (en) * 2018-03-09 2019-09-12 Adobe Inc. Utilizing a touchpoint attribution attention neural network to identify significant touchpoints and measure touchpoint contribution in multichannel, multi-touch digital content campaigns
CN110334697A (en) * 2018-08-11 2019-10-15 昆山美卓智能科技有限公司 Intelligent table, monitoring system server and monitoring method with condition monitoring function
CN110366050A (en) * 2018-04-10 2019-10-22 北京搜狗科技发展有限公司 Processing method, device, electronic equipment and the storage medium of video data
WO2019218427A1 (en) * 2018-05-17 2019-11-21 深圳市鹰硕技术有限公司 Method and apparatus for detecting degree of attention based on comparison of behavior characteristics
US20190379938A1 (en) * 2018-06-07 2019-12-12 Realeyes Oü Computer-implemented system and method for determining attentiveness of user


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Jiwei et al., "An attention analysis method for face images supervised by EEG information", Computer Engineering & Science *
LU Xiangqun et al., "Research on the application of educational informatization based on 5G technology", China Engineering Science *
ZUO Guocai et al., "Research on a classroom behavior analysis and evaluation system based on deep-learning face recognition", Intelligent Computer and Applications *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709362A (en) * 2020-06-16 2020-09-25 百度在线网络技术(北京)有限公司 Method, device, equipment and storage medium for determining key learning content
CN111709362B (en) * 2020-06-16 2023-08-08 百度在线网络技术(北京)有限公司 Method, device, equipment and storage medium for determining important learning content
CN112307920A (en) * 2020-10-22 2021-02-02 东云睿连(武汉)计算技术有限公司 High-risk work-type operator behavior early warning device and method
CN112307920B (en) * 2020-10-22 2024-03-22 东云睿连(武汉)计算技术有限公司 High-risk worker behavior early warning device and method
WO2022095386A1 (en) * 2020-11-05 2022-05-12 平安科技(深圳)有限公司 Online training evaluation method and apparatus, computer device and storage medium
CN112801052A (en) * 2021-04-01 2021-05-14 北京百家视联科技有限公司 User concentration degree detection method and user concentration degree detection system
CN112801052B (en) * 2021-04-01 2021-08-31 北京百家视联科技有限公司 User concentration degree detection method and user concentration degree detection system

Also Published As

Publication number Publication date
CN111144321B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN111144321B (en) Concentration detection method, device, equipment and storage medium
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
CN108509941B (en) Emotion information generation method and device
CN108875785B (en) Attention degree detection method and device based on behavior feature comparison
CN109063587B (en) Data processing method, storage medium and electronic device
CN112801052B (en) User concentration degree detection method and user concentration degree detection system
CN111738041A (en) Video segmentation method, device, equipment and medium
CN108898115B (en) Data processing method, storage medium and electronic device
CN113743273B (en) Real-time rope skipping counting method, device and equipment based on video image target detection
CN113723530B (en) Intelligent psychological assessment system based on video analysis and electronic psychological sand table
CN110087143A (en) Method for processing video frequency and device, electronic equipment and computer readable storage medium
CN113763348A (en) Image quality determination method and device, electronic equipment and storage medium
Ceneda et al. Show me your face: towards an automated method to provide timely guidance in visual analytics
CN109298783B (en) Mark monitoring method and device based on expression recognition and electronic equipment
CN113591678A (en) Classroom attention determination method, device, equipment, storage medium and program product
CN111915111A (en) Online classroom interaction quality evaluation method and device and terminal equipment
CN110363245B (en) Online classroom highlight screening method, device and system
CN110675361B (en) Method and device for establishing video detection model and video detection
CN112036328A (en) Bank customer satisfaction calculation method and device
CN111415283A (en) Factor analysis method and device for effective online teaching
CN111488846A (en) Method and equipment for identifying water level
CN116259104A (en) Intelligent dance action quality assessment method, device and system
CN113409822B (en) Object state determining method and device, storage medium and electronic device
CN115116136A (en) Abnormal behavior detection method, device and medium
CN115187437A (en) College teaching quality evaluation method and system based on big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210816

Address after: Room 301-112, floor 3, building 2, No. 18, YANGFANGDIAN Road, Haidian District, Beijing 100089

Applicant after: Beijing Rubu Technology Co.,Ltd.

Address before: Room 508-598, Xitian Gezhuang Town Government Office Building, No. 8 Xitong Road, Miyun District Economic Development Zone, Beijing 101500

Applicant before: BEIJING ROOBO TECHNOLOGY Co.,Ltd.

GR01 Patent grant