CN114063780A - Method and device for determining user concentration degree, VR equipment and storage medium - Google Patents

Method and device for determining user concentration degree, VR equipment and storage medium

Info

Publication number
CN114063780A
Authority
CN
China
Prior art keywords
user
determining
space data
visual focus
focus space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111370705.9A
Other languages
Chinese (zh)
Inventor
凤阳
陈晓东
谷杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lanzhou Lezhi Education Technology Co ltd
Original Assignee
Lanzhou Lezhi Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lanzhou Lezhi Education Technology Co ltd filed Critical Lanzhou Lezhi Education Technology Co ltd
Priority to CN202111370705.9A
Publication of CN114063780A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides a method and a device for determining user concentration, a VR device and a storage medium. The method is applied to a VR device and comprises the following steps: in a virtual reality-based service environment, determining, before the service begins, a virtual reality space that a user should pay attention to; in the virtual reality-based service environment, recording visual focus space data of the user in real time while the service is in progress; and determining an intersection region of the user's visual focus space data and the virtual reality space, and determining the user's concentration according to the area of the intersection region. Because the VR device determines the user's concentration from the area of the intersection region between the user's visual focus space data and the virtual reality space the user should pay attention to, the demands on the server, the network, and the time and observation ability of some users can be reduced.

Description

Method and device for determining user concentration degree, VR equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of virtual reality, in particular to a method and a device for determining user concentration degree, VR equipment and a storage medium.
Background
Virtual reality, as the name implies, is the combination of the virtual and the real. In principle, virtual reality (VR) technology is a computer simulation system that can create, and let users experience, a virtual world: a computer generates a simulated environment into which the user is immersed. VR technology combines electronic signals generated by computer technology with data from real life and converts them into phenomena that people can perceive; these phenomena may be real objects that exist in reality or substances invisible to the naked eye, and they are presented through three-dimensional models. They are called virtual reality because they are not observed directly but are a real world simulated by computer technology.
In a virtual reality-based environment (such as a classroom environment), all users (teachers, students, and so on) are in a 720-degree panoramic virtual reality scene, so some users (such as teachers) cannot directly observe the state of other users (such as how students behave in class). At present, the views of those other users (such as students) are therefore uploaded and observed as real-time video in order to determine their concentration (such as classroom concentration). This traditional approach of uploading the views of some users (such as students) and observing the video in real time places high demands on the server, the network, and the time and observation ability of other users (such as teachers).
Disclosure of Invention
In order to solve the technical problem that the conventional approach of uploading the views of some users and observing the video in real time places high demands on the server, the network, and the time and observation ability of other users, the embodiments of the present invention provide a method and a device for determining user concentration, a VR device, and a storage medium.
In a first aspect of the embodiments of the present invention, a method for determining a user concentration degree is provided, where the method is applied to a VR device, and the method includes:
in a virtual reality-based service environment, determining a virtual reality space which a user should pay attention to before service;
in a virtual reality-based service environment, recording visual focus space data of the user in a service process in real time;
determining an intersection region of the visual focus space data of the user and the virtual reality space, and determining the concentration degree of the user according to the area of the intersection region.
In an optional embodiment, the determining, before the business is performed, a virtual reality space that a user should pay attention to includes:
determining a virtual reality space which a user should pay attention to before service is carried out, and marking the virtual reality space in a spherical mark mode;
the determining an intersection region of the visual focus space data of the user with the virtual reality space comprises:
determining an intersection region of the visual focus space data of the user with the spherical marker.
In an alternative embodiment, the determining an intersection area of the visual focus space data of the user and the spherical marker comprises:
determining the distance between the user and the spherical mark, and judging whether the distance is greater than a preset distance threshold value;
if the distance is greater than the preset distance threshold, determining an intersection area of the visual focus space data of the user and the spherical mark.
In an alternative embodiment, the determining an intersection area of the visual focus space data of the user and the spherical marker comprises:
determining, according to a preset intersection-region determination period, an intersection region of the visual focus space data of the user and the spherical marker;
the determining the concentration of the user according to the area of the intersection region comprises:
determining attention parameters corresponding to the visual focus space data of the user according to the area of the intersection region;
and determining the concentration degree of the user by utilizing the attention degree parameter corresponding to the visual focus space data of the user.
In an optional embodiment, the determining, according to the area of the intersection region, a degree of attention parameter corresponding to the visual focus space data of the user includes:
and determining an area range to which the area of the intersection region belongs, and searching for an attention parameter corresponding to the area range as an attention parameter corresponding to the visual focus space data of the user.
In an optional embodiment, the method further comprises:
if the distance is smaller than or equal to the preset distance threshold, determining the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
In an alternative embodiment, the determining the concentration of the user based on the intersection of the visual focus space data of the user with the spherical marker comprises:
determining attention parameters corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark;
and determining the concentration degree of the user by utilizing the attention degree parameter corresponding to the visual focus space data of the user.
In an optional embodiment, the determining, according to the intersection of the visual focus space data of the user and the spherical marker, a degree of attention parameter corresponding to the visual focus space data of the user includes:
determining the intersection condition of the visual focus space data of the user and the spherical mark according to a preset intersection determination period;
if the visual focus space data of the user and the spherical mark have an intersection point, determining a first attention parameter as an attention parameter corresponding to the visual focus space data of the user;
and if the visual focus space data of the user does not have an intersection point with the spherical mark, determining a second attention parameter as an attention parameter corresponding to the visual focus space data of the user.
In an optional embodiment, the determining the concentration of the user by using the attention parameter corresponding to the visual focus space data of the user includes:
intercepting the visual focus space data of the user for a target time period from the visual focus space data of the user;
dividing the visual focus space data of the user for the target time period into the visual focus space data of the user for N time periods;
searching a target attention parameter corresponding to the visual focus space data of the user in each time period from the attention parameter corresponding to the visual focus space data of the user;
determining the parameter number of the target attention parameter corresponding to the visual focus space data of the user in the target time period;
and determining the concentration degree of the user by using the target attention degree parameter and the parameter quantity.
In an optional embodiment, the determining the concentration of the user by using the target attention parameter and the number of parameters includes:
inputting the target attention parameter and the parameter quantity into a concentration determination model, and acquiring the concentration of the user output by the concentration determination model;
wherein the concentration determination model comprises:
X = Total(X1, X2, ……, XN) / M;
the X comprises the user's concentration, the X1, X2, … …, XN each comprise the target concentration parameter, and the M comprises the number of parameters.
In a second aspect of the embodiments of the present invention, there is provided an apparatus for determining user attentiveness, which is applied to a VR device, and includes:
the space determining module is used for determining a virtual reality space which a user should pay attention to before business is carried out in a virtual reality-based business environment;
the data recording module is used for recording visual focus space data of the user in a virtual reality-based service environment in real time in the service process;
a concentration determination module for determining an intersection region of the visual focus space data of the user and the virtual reality space, and determining the concentration of the user according to an area of the intersection region.
In a third aspect of the embodiments of the present invention, there is further provided a VR device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor configured to implement the method for determining the user concentration according to the first aspect when executing the program stored in the memory.
In a fourth aspect of the embodiments of the present invention, there is also provided a storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the method for determining the user concentration degree described in the above first aspect.
In a fifth aspect of embodiments of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method for determining user attentiveness as described in the first aspect above.
According to the technical solution provided by the embodiments of the present invention, in a virtual reality-based service environment, the virtual reality space that a user should pay attention to is determined before the service begins; in the virtual reality-based service environment, the user's visual focus space data is recorded in real time while the service is in progress; an intersection region of the user's visual focus space data and the virtual reality space is determined; and the user's concentration is determined according to the area of the intersection region. Because the VR device determines the user's concentration from the area of the intersection region between the user's visual focus space data and the virtual reality space the user should pay attention to, the demands on the server, the network, and the time and observation ability of some users can be reduced.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flow chart illustrating an implementation of a method for determining a user concentration degree according to an embodiment of the present invention;
fig. 2 is a schematic view of a scene marked in a virtual reality space in the form of a spherical marker in the embodiment of the present invention;
fig. 3 is a schematic flow chart illustrating an implementation of another method for determining a user concentration degree according to an embodiment of the present invention;
fig. 4 is a schematic implementation flow chart of another method for determining the user concentration degree shown in the embodiment of the present invention;
FIG. 5 is a diagram illustrating a scenario of an intersection of visual focus space data of a student with a spherical marker in an embodiment of the present invention;
fig. 6 is a schematic view of a scenario that a VR device returns a concentration level of a student to a cloud in an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a user concentration determination device shown in the embodiment of the present invention;
fig. 8 is a schematic structural diagram of a VR device shown in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, an implementation flow diagram of a method for determining concentration of a user according to an embodiment of the present invention is provided, where the method is applied to a VR device (e.g., a VR headset), and specifically includes the following steps:
s101, in a virtual reality-based business environment, a virtual reality space which a user should pay attention to is determined before business is carried out.
In a virtual reality-based business environment, a business initiator can specify, for a business participant (i.e., a user), the virtual reality space that the user should pay attention to before the business proceeds, and this virtual reality space contains the business models that the user should pay attention to. Therefore, the VR device worn by the business participant (i.e., the user) can determine, in the virtual reality-based business environment, the virtual reality space that the user should pay attention to before the business proceeds.
Further, in a virtual reality-based business environment, the business initiator may specify the virtual reality space that a business participant (i.e., a user) should pay attention to before the business proceeds, and may specify that this virtual reality space is marked in the form of a spherical marker. Thus, the VR device worn by the business participant (i.e., the user) can determine, in the virtual reality-based business environment, the virtual reality space the user should pay attention to before the business proceeds, and mark it in the form of a spherical marker.
It should be noted that the spherical marker is merely a simple form of marker; other types of markers, such as a cube marker, may also be used, which is not limited in the embodiments of the present invention.
For example, in a virtual reality-based classroom environment, the teacher specifies, before class begins, the virtual reality space that students should pay attention to, and specifies that this virtual reality space is marked in the form of a spherical marker, as shown in fig. 2, in which a teaching model (not shown in the figure) exists. Therefore, the VR device worn by a student can determine, in the virtual reality-based classroom environment, the virtual reality space the student should pay attention to before class begins, and mark it in the form of a spherical marker.
It should be noted that the service may be, for example, a class or a conference; the corresponding service environment may be, for example, a classroom environment or a conference environment; and the corresponding service model may be, for example, a teaching model or a conference model. None of these are limited in the embodiments of the present invention.
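As an illustration only, the spherical marker could be represented on the VR device by nothing more than a center and a radius in world coordinates; the following Python sketch assumes such a minimal representation (the field names are not taken from the patent).

```python
from dataclasses import dataclass

@dataclass
class SphericalMarker:
    """Minimal assumed representation of the spherical marker placed by the business initiator."""
    center: tuple  # (x, y, z) center of the marked virtual reality space, in world coordinates
    radius: float  # marker radius, e.g. in meters

# Example: mark the space around a teaching model placed in front of the class.
teaching_space = SphericalMarker(center=(0.0, 1.5, 2.0), radius=1.0)
```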
S102, in the virtual reality-based service environment, recording in real time the visual focus space data of the user while the service is in progress.
In a virtual reality-based service environment, a service participant, namely VR equipment worn by a user, can record visual focus space data of the user in a service process in real time.
For example, in a virtual reality-based classroom environment, VR devices worn by students can record visual focus spatial data of the students in the course of a classroom in real-time.
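A possible sketch of this real-time recording is shown below; it assumes the VR device exposes eye-tracking style calls such as get_gaze_origin() and get_gaze_direction(), which are illustrative names rather than an API defined by the patent.

```python
import time

def record_visual_focus(vr_device, buffer, sample_rate_hz=60):
    """Sample the user's visual focus while the session is active (illustrative sketch)."""
    while vr_device.is_session_active():               # assumed helper: session still running
        buffer.append({
            "t": time.time(),                          # timestamp of the sample
            "origin": vr_device.get_gaze_origin(),     # assumed helper: user/eye position
            "direction": vr_device.get_gaze_direction()  # assumed helper: unit gaze direction
        })
        time.sleep(1.0 / sample_rate_hz)               # simple fixed-rate sampling
```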
S103, determining an intersection region of the visual focus space data of the user and the virtual reality space, and determining the concentration degree of the user according to the area of the intersection region.
For the user's visual focus space data recorded in real time while the business is in progress, the VR device worn by the business participant (i.e., the user) can determine the intersection region of that data and the virtual reality space the user should pay attention to, determine the user's concentration according to the area of the intersection region, and use the user's concentration to evaluate the business effect, for example, evaluate a student's learning effect according to the student's concentration.
In order to conveniently determine the user's concentration, for the user's visual focus space data recorded in real time while the service is in progress, the VR device worn by the service participant (i.e., the user) can determine the intersection region of the user's visual focus space data and the spherical marker, and then determine the user's concentration according to the area of that intersection region.
In addition, interaction often occurs during the business process. For example, during class the teacher may interact with a student and move the student's position closer to the teaching model; at that moment the student's concentration is high. Such interaction causes the distance between the student and the spherical marker to vary, so different concentration determination modes are selected according to the distance.
Based on this, the distance between the user and the spherical marker is determined, and whether the distance is greater than a preset distance threshold is judged; if the distance is greater than the preset distance threshold, the intersection region of the user's visual focus space data and the spherical marker is determined. In this case, where the distance is greater than the preset distance threshold, the intersection region of the user's visual focus space data and the spherical marker is determined according to a preset intersection-region determination period.
For example, in the embodiment of the present invention, the distance between the student and the spherical marker is determined, and whether the distance is greater than a preset distance threshold is judged. If the distance is greater than the preset distance threshold, this indicates that the teacher and the student are not interacting; in that case the intersection region of the student's visual focus space data and the spherical marker may be determined every 1 second, and the student's concentration is determined according to the area of the intersection region so as to evaluate the student's learning effect.
It should be noted that the area of the intersection region between the user's visual focus space data and the spherical marker can be regarded as the area of the spherical marker visible to the user; how this area is obtained is not limited in the embodiments of the present invention.
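The distance-based choice between the two determination modes described above could look roughly like the following sketch; the 3.0 m threshold is an arbitrary illustrative value, not one given by the patent.

```python
import math

def select_concentration_mode(user_position, marker_center, distance_threshold=3.0):
    """Return "area" when the user is far from the spherical marker (no interaction
    assumed), and "intersection" when the user is close to it."""
    dx, dy, dz = (user_position[i] - marker_center[i] for i in range(3))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    return "area" if distance > distance_threshold else "intersection"
```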
In addition, in the embodiment of the present invention, as shown in fig. 3, an implementation flowchart of another method for determining concentration of a user according to the embodiment of the present invention is shown, where the method is applied to a VR device (e.g., a VR headset), and specifically includes the following steps:
s301, determining a focus degree parameter corresponding to the visual focus space data of the user according to the area of the intersection region.
In the embodiment of the invention, different attention parameters are set for different area ranges. For example, for area range 1 (1~2 m²) the corresponding attention parameter is 1, and for area range 2 the corresponding attention parameter is 5.
Based on this, in the embodiment of the present invention, the area range to which the area of the intersection region belongs is determined, and the attention parameter corresponding to the area range is searched for as the attention parameter corresponding to the visual focus space data of the user, thereby completing the determination of the attention parameter corresponding to the visual focus space data of the user.
For example, in the embodiment of the present invention, an area range 1 to which the area of the intersection region belongs is determined, and the attention parameter 1 corresponding to the area range is searched for as the attention parameter corresponding to the visual focus space data of the user, thereby completing the determination of the attention parameter corresponding to the visual focus space data of the user.
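A minimal sketch of such a lookup is given below; the area boundaries and parameter values are assumptions made for illustration, since the patent only requires that each area range map to some attention parameter.

```python
# (min_area_m2, max_area_m2, attention parameter) - illustrative values only
AREA_RANGES = [
    (0.0, 1.0, 0),
    (1.0, 2.0, 1),
    (2.0, 4.0, 5),
]

def attention_from_area(intersection_area):
    """Return the attention parameter whose area range contains the intersection area."""
    for lo, hi, parameter in AREA_RANGES:
        if lo <= intersection_area < hi:
            return parameter
    return None  # no corresponding attention parameter: excluded from the concentration calculation
```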
S302, determining the concentration degree of the user by using the attention degree parameter corresponding to the visual focus space data of the user.
For the user's visual focus space data recorded in real time while the service is in progress, when the distance between the user and the spherical marker is greater than the preset distance threshold, the VR device worn by the service participant (i.e., the user) determines, according to a preset intersection-region determination period, the intersection region of the user's visual focus space data and the spherical marker, determines the area range to which the area of the intersection region belongs, and looks up the attention parameter corresponding to that area range as the attention parameter corresponding to the user's visual focus space data. The user's concentration can then be determined on the basis of these attention parameters, that is, the user's concentration is determined by using the attention parameters corresponding to the user's visual focus space data.
For example, for the visual focus space data of student A recorded in real time during class, when the distance between the student and the spherical marker is greater than the preset distance threshold, the VR device worn by student A determines, every 1 second, the intersection region of student A's visual focus space data and the spherical marker, determines the area range to which the area of the intersection region belongs, and looks up the attention parameter corresponding to that area range as the attention parameter corresponding to student A's visual focus space data, as shown in Table 1 below. The attention parameters corresponding to student A's visual focus space data can then be used to determine student A's concentration, that is, the classroom concentration, so as to evaluate the student's learning effect.
Visual focus space data of student A | Attention parameter
Visual focus space data of student A at second 1 | 10
Visual focus space data of student A at second 2 | 5
Visual focus space data of student A at second 3 | 10
…… | ……
TABLE 1
It should be noted that, for the user's visual focus space data recorded in real time while the service is in progress, the VR device worn by the service participant (i.e., the user) determines, according to a preset intersection-region determination period, the intersection region of the user's visual focus space data and the spherical marker, determines the area range to which the area of the intersection region belongs, and looks up the attention parameter corresponding to that area range as the attention parameter corresponding to the user's visual focus space data. This means that the user's visual focus space data in each intersection-region determination period has a corresponding attention parameter, as shown in Table 1 above.
In addition, in order to reduce the amount of calculation and improve the efficiency of determining the user's concentration, for the user's visual focus space data recorded in real time while the service is in progress, the VR device worn by the service participant (i.e., the user) may intercept from it the user's visual focus space data for a target time period, and the visual focus space data for the target time period may then be divided into the user's visual focus space data for N time periods.
From the attention parameters corresponding to the user's visual focus space data recorded in real time while the service is in progress, the target attention parameter corresponding to the user's visual focus space data in each time period is looked up, and the number of target attention parameters corresponding to the user's visual focus space data in the target time period is determined, so that the user's concentration is determined by using the target attention parameters and the number of parameters. It should be noted that any visual focus space data of the user that has no corresponding attention parameter is excluded and does not take part in the concentration determination.
For example, for the visual focus space data of student A recorded in real time during class, the VR device worn by student A may intercept from it student A's visual focus space data for the past N seconds, so that this data may be divided into N pieces of 1-second visual focus space data, that is, student A's visual focus space data at the 1st, 2nd, ……, and Nth seconds of the past N seconds.
From the attention parameters corresponding to the user's visual focus space data recorded in real time while the service is in progress, as shown in Table 1 above, the target attention parameters corresponding to student A's visual focus space data at each of the 1st, 2nd, ……, and Nth seconds of the past N seconds are looked up, and the number of target attention parameters corresponding to student A's visual focus space data in the past N seconds is determined, so that the user's concentration can be determined by using the target attention parameters and the number of parameters.
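A sketch of intercepting the target time period (for example, the past N seconds) and discarding data without a corresponding attention parameter might look as follows, assuming the per-period attention parameters are kept as (timestamp, parameter) pairs:

```python
def window_attention_parameters(samples, window_seconds):
    """Return the target attention parameters for the target time period.

    `samples` is an assumed list of (timestamp, attention_parameter) pairs, one per
    determination period; entries whose parameter is None are excluded, as described above.
    """
    if not samples:
        return []
    latest = samples[-1][0]                       # end of the recorded data
    return [p for (t, p) in samples
            if latest - t < window_seconds and p is not None]
```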
For the target attention parameters corresponding to the user's visual focus space data in each time period, and the number of these target attention parameters within the target time period, the VR device worn by the business participant (i.e., the user) may input the target attention parameters and the number of parameters into a concentration determination model and obtain the user's concentration output by the model, where the concentration determination model includes:
X = Total(X1, X2, ……, XN) / M;
the X comprises the user's concentration, the X1, X2, … …, XN each comprise the target concentration parameter, and the M comprises the number of parameters.
For example, for the number M of target attention parameters corresponding to student A's visual focus space data in the past N seconds and the target attention parameters corresponding to student A's visual focus space data at the 1st, 2nd, ……, and Nth seconds of the past N seconds, the VR device worn by student A inputs the target attention parameters and the number of parameters into the concentration determination model and obtains student A's concentration output by the concentration determination model.
Note that X1 represents the target attention parameter corresponding to student A's visual focus space data at the 1st second of the past N seconds, X2 represents the target attention parameter corresponding to student A's visual focus space data at the 2nd second of the past N seconds, ……, and XN represents the target attention parameter corresponding to student A's visual focus space data at the Nth second of the past N seconds.
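Read this way, the concentration determination model is simply the sum of the target attention parameters divided by their number M; a minimal sketch, using 1/0 attention parameters as in the intersection-based example later in the description:

```python
def concentration_from_parameters(target_parameters):
    """Concentration X = Total(X1, ..., XN) / M, where M is the number of parameters."""
    m = len(target_parameters)
    if m == 0:
        return None          # nothing to evaluate for this time period
    return sum(target_parameters) / m

# Example with 1/0 attention parameters over four periods: 3 of 4 periods on target.
concentration_from_parameters([1, 1, 0, 1])  # -> 0.75
```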
In addition, if the distance between the user and the spherical marker is less than or equal to the preset distance threshold, this indicates that the user is close to the virtual reality space the user should pay attention to and that the service initiator is interacting with the service participant. In this case, in order to save calculation and speed up the determination of concentration, the user's concentration is determined according to whether the user's visual focus space data intersects the spherical marker, and the service effect is evaluated according to the user's concentration, for example, the student's learning effect is evaluated according to the student's concentration.
Specifically, as shown in fig. 4, for another implementation flow diagram of the method for determining the concentration degree of the user according to the embodiment of the present invention, the method is applied to a VR device (e.g., a VR headset), and specifically may include the following steps:
s401, according to the intersection condition of the visual focus space data of the user and the spherical mark, determining an attention parameter corresponding to the visual focus space data of the user.
For the visual focus space data of the user in the real-time recorded service process, the service participant, namely the VR device worn by the user, can determine the attention parameter corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
Specifically, for the visual focus space data of the user during the service process recorded in real time, if the visual focus space data of the user and the spherical mark have an intersection, a service participant, that is, a VR device worn by the user, determines that the first attention parameter is the attention parameter corresponding to the visual focus space data of the user.
And for the visual focus space data of the user in the real-time recorded service process, if the intersection point does not exist between the visual focus space data of the user and the spherical mark, determining that the second attention parameter is the attention parameter corresponding to the visual focus space data of the user by a service participant, namely VR equipment worn by the user.
For example, for the visual focus space data of the student a during the classroom progression recorded in real time, if there is an intersection between the visual focus space data of the student a and the spherical marker, as shown in fig. 5, the VR device worn by the student a determines 1 as the attention parameter corresponding to the visual focus space data of the student a.
For example, for the visual focus space data of the student B during the classroom progression recorded in real time, if there is no intersection between the visual focus space data of the student B and the spherical marker, as shown in fig. 5, the VR device worn by the student B determines 0 as the attention parameter corresponding to the visual focus space data of the student B.
In addition, in the embodiment of the present invention, for the visual focus space data of the user during the service process recorded in real time, the service participant, that is, the VR device worn by the user, may determine the intersection condition of the visual focus space data of the user and the spherical mark according to a preset intersection determination period, and determine the attention parameter corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
For example, for the visual focus space data of student A recorded in real time during class, the VR device worn by student A determines, every 1 second, whether student A's visual focus space data intersects the spherical marker; if there is an intersection point, 1 is determined as the attention parameter corresponding to student A's visual focus space data, and otherwise 0 is determined as the attention parameter corresponding to student A's visual focus space data.
It should be noted that the preset intersection determination period may be, for example, 1 second or 0.5 second, which is not limited in this embodiment of the present invention. In addition, the first attention parameter and the second attention parameter may be set according to actual requirements, which is likewise not limited in the embodiments of the present invention.
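The per-period intersection check can be sketched as a standard ray-sphere test, with the user's gaze treated as a ray; the values 1 and 0 follow the first and second attention parameters used in the example above (the helper below is an illustration, not the patent's own implementation).

```python
import math

def gaze_attention_parameter(gaze_origin, gaze_direction, marker_center, marker_radius):
    """Return 1 if the gaze ray intersects the spherical marker, otherwise 0."""
    oc = [gaze_origin[i] - marker_center[i] for i in range(3)]
    b = 2.0 * sum(gaze_direction[i] * oc[i] for i in range(3))  # gaze_direction assumed unit length
    c = sum(x * x for x in oc) - marker_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return 0                                   # second attention parameter: no intersection point
    t = (-b - math.sqrt(disc)) / 2.0               # nearest intersection along the ray
    if t < 0:
        t = (-b + math.sqrt(disc)) / 2.0
    return 1 if t >= 0 else 0                      # first attention parameter: marker in front of the user
```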
S402, determining the concentration degree of the user by using the attention degree parameter corresponding to the visual focus space data of the user.
For the user's visual focus space data recorded in real time while the service is in progress, the VR device worn by the service participant (i.e., the user) determines, according to the preset intersection determination period, the attention parameter corresponding to the user's visual focus space data, and can then determine the user's concentration on the basis of these attention parameters, that is, the user's concentration is determined by using the attention parameters corresponding to the user's visual focus space data.
For example, for the visual focus space data of student A recorded in real time during class, the VR device worn by student A determines, every 1 second, the attention parameter corresponding to student A's visual focus space data, as shown in Table 2 below, so that the attention parameters corresponding to student A's visual focus space data can be used to determine student A's concentration, that is, the classroom concentration.
TABLE 2 (rendered as images in the original publication: the attention parameter, 1 or 0, determined for student A's visual focus space data in each 1-second intersection determination period)
It should be noted that, for the user's visual focus space data recorded in real time while the service is in progress, the VR device worn by the service participant (i.e., the user) determines the attention parameter corresponding to the user's visual focus space data according to the preset intersection determination period; this means that the user's visual focus space data in each intersection determination period has a corresponding attention parameter, as shown in Table 2 above.
In addition, in order to reduce the amount of calculation and improve the efficiency of determining the user's concentration, for the user's visual focus space data recorded in real time while the service is in progress, the VR device worn by the service participant (i.e., the user) may intercept from it the user's visual focus space data for a target time period, and the visual focus space data for the target time period may then be divided into the user's visual focus space data for N time periods.
From the attention parameters corresponding to the user's visual focus space data recorded in real time while the service is in progress, the target attention parameter corresponding to the user's visual focus space data in each time period is looked up, and the number of target attention parameters corresponding to the user's visual focus space data in the target time period is determined, so that the user's concentration is determined by using the target attention parameters and the number of parameters. It should be noted that any visual focus space data of the user that has no corresponding attention parameter is excluded and does not take part in the concentration determination.
For example, for the visual focus space data of student A recorded in real time during class, the VR device worn by student A may intercept from it student A's visual focus space data for the past N seconds, so that this data may be divided into N pieces of 1-second visual focus space data, that is, student A's visual focus space data at the 1st, 2nd, ……, and Nth seconds of the past N seconds.
From the attention parameters corresponding to the user's visual focus space data recorded in real time while the service is in progress, as shown in Table 2, the target attention parameters corresponding to student A's visual focus space data at each of the 1st, 2nd, ……, and Nth seconds of the past N seconds are looked up, and the number of target attention parameters corresponding to student A's visual focus space data in the past N seconds is determined, so that the user's concentration can be determined by using the target attention parameters and the number of parameters.
For the target attention parameters corresponding to the user's visual focus space data in each time period, and the number of these target attention parameters within the target time period, the VR device worn by the business participant (i.e., the user) may input the target attention parameters and the number of parameters into the concentration determination model and obtain the user's concentration output by the model, where the concentration determination model includes:
X = Total(X1, X2, ……, XN) / M;
wherein X comprises the user's concentration, X1, X2, ……, XN each comprise the target attention parameter, and M comprises the number of parameters.
For example, for the number M of target attention parameters corresponding to student A's visual focus space data in the past N seconds and the target attention parameters corresponding to student A's visual focus space data at the 1st, 2nd, ……, and Nth seconds of the past N seconds, the VR device worn by student A inputs the target attention parameters and the number of parameters into the concentration determination model and obtains student A's concentration output by the concentration determination model.
Note that X1 represents the target attention parameter corresponding to student A's visual focus space data at the 1st second of the past N seconds, X2 represents the target attention parameter corresponding to student A's visual focus space data at the 2nd second of the past N seconds, ……, and XN represents the target attention parameter corresponding to student A's visual focus space data at the Nth second of the past N seconds.
In addition, different concentration intervals are preset in the embodiment of the present invention, and each concentration interval has a corresponding concentration level. For example, concentration intervals 1, 2, and 3 are set in advance, and each concentration interval has a corresponding concentration level, as shown in Table 3 below.
Concentration interval | Concentration interval range | Concentration level
Concentration interval 1 | 90%~100% | Excellent
Concentration interval 2 | 60%~90% | Qualified
Concentration interval 3 | 0%~60% | Unqualified
TABLE 3
For the user's concentration, the VR device worn by the business participant (i.e., the user) can look up the concentration interval corresponding to that concentration and determine the target concentration level corresponding to that interval as the user's concentration level. For example, for student A's concentration of 92%, the VR device worn by student A can look up the corresponding concentration interval (concentration interval 1) and determine student A's concentration level (excellent).
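A sketch of the interval lookup following Table 3 is given below; how the boundary values (60%, 90%) are assigned to an interval is an assumption.

```python
# (lower bound, upper bound, concentration level), following Table 3
CONCENTRATION_INTERVALS = [
    (0.90, 1.00, "excellent"),
    (0.60, 0.90, "qualified"),
    (0.00, 0.60, "unqualified"),
]

def concentration_level(concentration):
    """Return the concentration level whose interval contains the concentration value."""
    for lo, hi, level in CONCENTRATION_INTERVALS:
        if lo <= concentration <= hi:
            return level
    return None

concentration_level(0.92)  # -> "excellent", as in the student A example above
```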
The VR device worn by the business participant (i.e., the user) can send the user's concentration level to the cloud. In this way, the VR devices worn by multiple business participants (i.e., users) each send the corresponding user's concentration level to the cloud, and the cloud counts the number of users at each concentration level and evaluates the quality of the business according to the number of users at each level.
For example, the VR device worn by student 1 sends student 1's concentration level to the cloud, the VR device worn by student 2 sends student 2's concentration level to the cloud, ……, and the VR device worn by student N sends student N's concentration level to the cloud, as shown in fig. 6. The cloud can then count, according to each student's concentration level, the number of students at each concentration level, as shown in Table 4 below, and evaluate the quality of the class according to the number of students at each concentration level.
Concentration level | Number of students
Excellent | 30
Qualified | 3
Unqualified | 2
TABLE 4
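On the cloud side, counting the number of students at each concentration level can be sketched as follows; how the levels are transmitted to the cloud is not shown.

```python
from collections import Counter

def summarize_class_quality(reported_levels):
    """Count students per concentration level from the levels reported by their VR devices."""
    return Counter(reported_levels)

# Example matching Table 4: 30 excellent, 3 qualified, 2 unqualified.
summarize_class_quality(["excellent"] * 30 + ["qualified"] * 3 + ["unqualified"] * 2)
```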
It should be noted that the larger the number of students at the "excellent" concentration level, the higher the quality of the class, which indirectly indicates that the teacher's class is more popular with the students.
Through the above description of the technical solution provided by the embodiment of the present invention, in a virtual reality-based service environment, a virtual reality space that a user should pay attention to before a service is performed is determined, in the virtual reality-based service environment, visual focus space data of the user during the service is recorded in real time, an intersection region of the visual focus space data of the user and the virtual reality space is determined, and the concentration degree of the user is determined according to the area of the intersection region.
The VR device determines the user's concentration from the area of the intersection region between the user's visual focus space data and the virtual reality space the user should pay attention to. Because the concentration is determined on the VR device itself, only a small amount of the VR device's battery power is consumed, and the demands on the server, the network, and the time and observation ability of some users can be reduced. In addition, the user's concentration level can be returned to the cloud so as to evaluate the quality of the service.
Corresponding to the foregoing method embodiment, an embodiment of the present invention further provides a device for determining user concentration. As shown in fig. 7, the device is applied to a VR device and may include: a space determination module 710, a data recording module 720, and a concentration determination module 730.
A space determining module 710, configured to determine, in a virtual reality-based service environment, a virtual reality space that a user should pay attention to before a service is performed;
a data recording module 720, configured to record, in real time, visual focus spatial data of the user during a service process in a virtual reality-based service environment;
a concentration determination module 730, configured to determine an intersection region of the visual focus space data of the user and the virtual reality space, and determine the concentration of the user according to an area of the intersection region.
An embodiment of the present invention further provides a VR device, as shown in fig. 8, including a processor 81, a communication interface 82, a memory 83, and a communication bus 84, where the processor 81, the communication interface 82, and the memory 83 complete mutual communication through the communication bus 84,
a memory 83 for storing a computer program;
the processor 81 is configured to implement the following steps when executing the program stored in the memory 83:
in a virtual reality-based service environment, determining a virtual reality space which a user should pay attention to before service; in a virtual reality-based service environment, recording visual focus space data of the user in a service process in real time; determining an intersection region of the visual focus space data of the user and the virtual reality space, and determining the concentration degree of the user according to the area of the intersection region.
The communication bus mentioned in the VR device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the VR device and other devices.
The memory may include a Random Access Memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to perform the method for determining the user concentration as described in any of the above embodiments.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of determining user concentration as described in any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a storage medium or transmitted from one storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The storage medium may be any available medium that a computer can access, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (13)

1. A method for determining user concentration, applied to a VR device, the method comprising:
in a virtual reality-based service environment, determining a virtual reality space which a user should pay attention to before service;
in a virtual reality-based service environment, recording visual focus space data of the user in a service process in real time;
determining an intersection region of the visual focus space data of the user and the virtual reality space, and determining the concentration degree of the user according to the area of the intersection region.
2. The method of claim 1, wherein determining the virtual reality space that the user should focus on before the business is performed comprises:
determining a virtual reality space which a user should pay attention to before service is carried out, and marking the virtual reality space in a spherical mark mode;
the determining an intersection region of the visual focus space data of the user with the virtual reality space comprises:
determining an intersection region of the visual focus space data of the user and the spherical mark.
3. The method of claim 2, wherein the determining an intersection region of the visual focus space data of the user and the spherical mark comprises:
determining the distance between the user and the spherical mark, and judging whether the distance is greater than a preset distance threshold value;
if the distance is greater than the preset distance threshold, determining an intersection region of the visual focus space data of the user and the spherical mark.
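
A small sketch of the distance gate described in claims 3 and 6, under the assumption that the user and mark positions are 3D points and that the threshold value (here 1.5 m) is freely configurable; the claims only require "a preset distance threshold", and `choose_branch` is a hypothetical helper name.

```python
import math

DISTANCE_THRESHOLD_M = 1.5  # assumed threshold; the claims do not fix a concrete value

def choose_branch(user_pos, mark_center, threshold: float = DISTANCE_THRESHOLD_M) -> str:
    """Claim 3: evaluate the intersection region (area) when the user is farther
    than the threshold; claim 6: otherwise fall back to the intersection-point check."""
    if math.dist(user_pos, mark_center) > threshold:
        return "intersection_region"    # claims 3 to 5
    return "intersection_point"         # claims 6 to 8

# User 3 m away from the spherical mark, so the area-based branch is chosen.
print(choose_branch((0.0, 0.0, 0.0), (0.0, 0.0, 3.0)))  # intersection_region
```
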
4. The method of claim 2 or 3, wherein the determining an intersection region of the visual focus space data of the user and the spherical mark comprises:
determining an intersection region of the visual focus space data of the user and the spherical mark according to a preset intersection region determination period;
the determining the concentration of the user according to the area of the intersection region comprises:
determining attention parameters corresponding to the visual focus space data of the user according to the area of the intersection region;
and determining the concentration degree of the user by utilizing the attention degree parameter corresponding to the visual focus space data of the user.
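
One way the periodic determination in claim 4 might look in code, assuming a fixed sampling period and a hypothetical `measure_area` callback provided by the eye-tracking layer; both are assumptions, not details given in the claims.

```python
import time
from typing import Callable, List

SAMPLING_PERIOD_S = 0.5  # assumed value of the "preset intersection region determination period"

def sample_intersection_areas(measure_area: Callable[[], float], duration_s: float) -> List[float]:
    """Claim 4 sketch: once per sampling period, ask the eye-tracking/rendering
    layer (via the hypothetical measure_area callback) for the current
    intersection area between the visual focus and the spherical mark."""
    areas: List[float] = []
    elapsed = 0.0
    while elapsed < duration_s:
        areas.append(measure_area())
        time.sleep(SAMPLING_PERIOD_S)
        elapsed += SAMPLING_PERIOD_S
    return areas

# Stubbed measurement that always reports 0.4 area units over a 1-second window.
print(sample_intersection_areas(lambda: 0.4, duration_s=1.0))  # [0.4, 0.4]
```
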
5. The method of claim 4, wherein determining the attention parameter corresponding to the visual focus space data of the user according to the area of the intersection region comprises:
and determining an area range to which the area of the intersection region belongs, and searching for an attention parameter corresponding to the area range as an attention parameter corresponding to the visual focus space data of the user.
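
A sketch of the range lookup in claim 5. The concrete area boundaries and attention parameter values in `AREA_RANGES` are invented for illustration; the claim only requires that each area range maps to one attention parameter.

```python
# Illustrative area ranges (arbitrary area units) and the attention parameter
# assigned to each; the boundaries and values are assumptions, not claim content.
AREA_RANGES = [
    (0.00, 0.25, 0.2),          # small overlap, low attention parameter
    (0.25, 0.75, 0.6),
    (0.75, float("inf"), 1.0),  # large overlap, high attention parameter
]

def attention_from_area(intersection_area: float) -> float:
    """Claim 5: find the area range the intersection area belongs to and return
    the attention parameter associated with that range."""
    for lower, upper, parameter in AREA_RANGES:
        if lower <= intersection_area < upper:
            return parameter
    return 0.0  # e.g. a negative or otherwise out-of-range area

print(attention_from_area(0.5))  # 0.6
```
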
6. The method of claim 3, further comprising:
if the distance is smaller than or equal to the preset distance threshold, determining the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the spherical mark.
7. The method of claim 6, wherein the determining the concentration degree of the user according to the intersection condition of the visual focus space data of the user and the spherical mark comprises:
determining attention parameters corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark;
and determining the concentration degree of the user by utilizing the attention degree parameter corresponding to the visual focus space data of the user.
8. The method of claim 7, wherein the determining the attention parameter corresponding to the visual focus space data of the user according to the intersection condition of the visual focus space data of the user and the spherical mark comprises:
determining the intersection condition of the visual focus space data of the user and the spherical mark according to a preset intersection determination period;
if the visual focus space data of the user and the spherical mark have an intersection point, determining a first attention parameter as an attention parameter corresponding to the visual focus space data of the user;
and if the visual focus space data of the user does not have an intersection point with the spherical mark, determining a second attention parameter as an attention parameter corresponding to the visual focus space data of the user.
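
For the near-distance branch of claims 7 and 8, the intersection test between the visual focus and the spherical mark can be sketched as a standard ray-sphere test, with the gaze modelled as a ray from the eye position along the gaze direction. The parameter values and helper names below are assumptions, not claim content.

```python
import math

FIRST_ATTENTION_PARAM = 1.0    # assumed value: the gaze ray intersects the spherical mark
SECOND_ATTENTION_PARAM = 0.0   # assumed value: the gaze ray misses the spherical mark

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gaze_hits_sphere(origin, direction, center, radius) -> bool:
    """Standard ray-sphere test: does the gaze ray from origin along the
    normalised direction reach the sphere defined by center and radius?"""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = _dot(oc, direction)
    c = _dot(oc, oc) - radius * radius
    disc = b * b - c
    return disc >= 0.0 and (-b + math.sqrt(disc)) >= 0.0

def attention_for_sample(origin, direction, center, radius) -> float:
    """Claim 8: first attention parameter if there is an intersection point,
    second attention parameter otherwise."""
    if gaze_hits_sphere(origin, direction, center, radius):
        return FIRST_ATTENTION_PARAM
    return SECOND_ATTENTION_PARAM

# Gaze straight ahead at a spherical mark 2 m away with radius 0.3 m -> first parameter.
print(attention_for_sample((0, 0, 0), (0, 0, 1), (0, 0, 2), 0.3))  # 1.0
```
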
9. The method according to claim 4 or 7, wherein the determining the user's concentration using the attention parameter corresponding to the visual focus space data of the user comprises:
intercepting, from the visual focus space data of the user, the visual focus space data of the user in a target time period;
dividing the visual focus space data of the user in the target time period into visual focus space data of the user in N time periods;
searching, from the attention parameters corresponding to the visual focus space data of the user, for the target attention parameter corresponding to the visual focus space data of the user in each time period;
determining the parameter quantity of the target attention parameters corresponding to the visual focus space data of the user in the target time period;
and determining the concentration degree of the user by using the target attention parameters and the parameter quantity.
10. The method of claim 9, wherein the determining the concentration degree of the user by using the target attention parameters and the parameter quantity comprises:
inputting the target attention parameters and the parameter quantity into a concentration determination model, and acquiring the concentration degree of the user output by the concentration determination model;
wherein the concentration determination model comprises:
X = Total[Average(X1, X2, …, XN)] / M;
wherein X denotes the concentration degree of the user, X1, X2, …, XN each denote a target attention parameter, and M denotes the parameter quantity.
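
The model of claims 9 and 10 can be sketched as follows, under one plausible reading of X = Total[Average(X1, X2, …, XN)] / M: the attention parameters recorded in the target time period are split into N sub-periods, each sub-period is averaged, the averages are totalled, and the result is divided by the parameter quantity M. The equal-length split and the grouping of Total and Average are interpretations, not stated verbatim in the claims.

```python
from typing import List, Sequence

def split_into_periods(samples: List[float], n_periods: int) -> List[List[float]]:
    """Claim 9 (assumed equal-length split): divide the attention parameters
    recorded over the target time period into N consecutive sub-periods."""
    size = max(1, len(samples) // n_periods)
    chunks = [samples[i:i + size] for i in range(0, len(samples), size)]
    return chunks[:n_periods]

def concentration(per_period_params: Sequence[Sequence[float]], parameter_quantity: int) -> float:
    """One reading of X = Total[Average(X1, ..., XN)] / M: average within each
    sub-period, total the averages, and divide by the parameter quantity M."""
    period_averages = [sum(p) / len(p) for p in per_period_params if p]
    return sum(period_averages) / parameter_quantity if parameter_quantity else 0.0

# Six sampled attention parameters over the target window, split into N = 3 periods.
samples = [1.0, 1.0, 0.0, 1.0, 0.0, 0.0]
periods = split_into_periods(samples, n_periods=3)   # [[1.0, 1.0], [0.0, 1.0], [0.0, 0.0]]
print(concentration(periods, parameter_quantity=len(samples)))  # 0.25
```
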
11. An apparatus for determining concentration of a user, applied to a VR device, the apparatus comprising:
the space determining module is used for determining a virtual reality space which a user should pay attention to before business is carried out in a virtual reality-based business environment;
the data recording module is used for recording, in a virtual reality-based service environment, visual focus space data of the user in the service process in real time;
a concentration determination module for determining an intersection region of the visual focus space data of the user and the virtual reality space, and determining the concentration of the user according to an area of the intersection region.
12. A VR device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is used for storing a computer program;
the processor is used for implementing the method steps of any one of claims 1 to 10 when executing the program stored in the memory.
13. A storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 10.
CN202111370705.9A (priority and filing date 2021-11-18): Method and device for determining user concentration degree, VR equipment and storage medium; published as CN114063780A; legal status: pending.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111370705.9A CN114063780A (en) 2021-11-18 2021-11-18 Method and device for determining user concentration degree, VR equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114063780A 2022-02-18

Family

ID=80279241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111370705.9A Pending CN114063780A (en) 2021-11-18 2021-11-18 Method and device for determining user concentration degree, VR equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114063780A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105872575A (en) * 2016-04-12 2016-08-17 乐视控股(北京)有限公司 Live broadcasting method and apparatus based on virtual reality
CN108227928A (en) * 2018-01-10 2018-06-29 三星电子(中国)研发中心 Pick-up method and device in a kind of virtual reality scenario
CN110075519A (en) * 2019-05-06 2019-08-02 网易(杭州)网络有限公司 Information processing method and device, storage medium and electronic equipment in virtual reality
CN110464365A (en) * 2018-05-10 2019-11-19 深圳先进技术研究院 A kind of attention rate determines method, apparatus, equipment and storage medium
CN111241385A (en) * 2018-11-29 2020-06-05 北京京东尚科信息技术有限公司 Information processing method, information processing apparatus, computer system, and medium
CN113289331A (en) * 2021-06-09 2021-08-24 腾讯科技(深圳)有限公司 Display method and device of virtual prop, electronic equipment and storage medium
CN113506027A (en) * 2021-07-27 2021-10-15 北京工商大学 Course quality assessment and improvement method based on student visual attention and teacher behavior
CN113641246A (en) * 2021-08-25 2021-11-12 兰州乐智教育科技有限责任公司 Method and device for determining user concentration degree, VR equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109241425B (en) Resource recommendation method, device, equipment and storage medium
CN108460627A (en) Marketing activity scheme method for pushing, device, computer equipment and storage medium
CN107170308A (en) Classroom question and answer management method and system
CN110659311B (en) Topic pushing method and device, electronic equipment and storage medium
CN108122437A (en) Adaptive learning method and device
CN108322317A (en) A kind of account identification correlating method and server
CN108563749A (en) On-line education system resource recommendation method based on various dimensions information and knowledge network
CN115660909B (en) Digital school platform immersion type digital learning method and system
CN108419137A (en) Data processing method and data processing equipment
Tsoni et al. From Analytics to Cognition: Expanding the Reach of Data in Learning.
CN114021029A (en) Test question recommendation method and device
Sharadga et al. Journalists’ perceptions towards employing artificial intelligence techniques in Jordan TV’s newsrooms
CN113641246A (en) Method and device for determining user concentration degree, VR equipment and storage medium
WO2021135322A1 (en) Automatic question setting method, apparatus and system
CN109451332B (en) User attribute marking method and device, computer equipment and medium
CN114063780A (en) Method and device for determining user concentration degree, VR equipment and storage medium
US20160203724A1 (en) Social Classroom Integration And Content Management
JP2005018212A (en) Method and system for collecting information for grasping user's reaction to information contents on network
CN115617969A (en) Session recommendation method, device, equipment and computer storage medium
CN108053193A (en) Educational information is analyzed and querying method and system
CN108491547A (en) A kind of internet teaching system based on big data
CN111327943B (en) Information management method, device, system, computer equipment and storage medium
CN112651764B (en) Target user identification method, device, equipment and storage medium
CN109413459B (en) User recommendation method and related equipment in live broadcast platform
CN113673811A (en) Session-based online learning performance evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination