CN111258433A - Teaching interactive system based on virtual scene - Google Patents

Teaching interactive system based on virtual scene

Info

Publication number
CN111258433A
Authority
CN
China
Prior art keywords
teaching
data
scene
virtual scene
submodule
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010137104.2A
Other languages
Chinese (zh)
Other versions
CN111258433B (en)
Inventor
Wang Xin (王鑫)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yixue Education Technology Co Ltd
Original Assignee
Shanghai Yixue Education Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yixue Education Technology Co Ltd filed Critical Shanghai Yixue Education Technology Co Ltd
Priority to CN202010137104.2A priority Critical patent/CN111258433B/en
Publication of CN111258433A publication Critical patent/CN111258433A/en
Application granted granted Critical
Publication of CN111258433B publication Critical patent/CN111258433B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Educational Technology (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a teaching interactive system based on a virtual scene. The system builds teaching virtual scenes in different modes from scene construction preparation data obtained by preprocessing teaching related data, and adjusts the adaptive running state and/or running parameters of a teaching virtual scene according to the state change data of teachers and/or students while the scene runs. Adaptive teaching scenes can therefore be switched according to different teaching contents and teaching requirements, which improves the interactivity and scene variability of the teaching process and fully integrates virtual scene technology into teaching, thereby improving teaching efficiency and making teaching more engaging.

Description

Teaching interactive system based on virtual scene
Technical Field
The invention relates to the technical field of intelligent interactive teaching, in particular to a teaching interactive system based on a virtual scene.
Background
At present, intelligent teaching is the main direction of development for teaching modes. Intelligent teaching can satisfy the needs of different teachers and students, and with online teaching it can deliver course content anytime and anywhere, which greatly improves its suitability and flexibility for different users in time and place. However, the intelligent teaching mode in the prior art is limited to unidirectional course delivery: it cannot realize teaching interaction between teachers and students, and it cannot switch to an adapted teaching scene according to different teaching contents and teaching requirements, which seriously limits the interactivity and scene variability of intelligent teaching. In addition, the existing teaching mode does not fully integrate virtual scene technology into actual teaching, so neither the efficiency nor the appeal of interactive teaching can be improved.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a teaching interactive system based on a virtual scene, which comprises an actual teaching data acquisition module, a teaching data processing module, a teaching virtual scene construction module, a virtual scene operation monitoring module and a virtual scene adjusting module. The actual teaching data acquisition module is used for acquiring teaching related data about a teacher object and/or a student object in a historical teaching process; the teaching data processing module is used for preprocessing the teaching related data to obtain corresponding scene construction preparation data; the teaching virtual scene construction module is used for constructing teaching virtual scenes in different modes according to the scene construction preparation data; the virtual scene operation monitoring module is used for acquiring object state change data of the teacher object and/or the student object during the operation of the teaching virtual scene; and the virtual scene adjusting module is used for adjusting the adaptive running state and/or running parameters of the current teaching virtual scene according to the object state change data. The teaching interactive system based on the virtual scene therefore builds teaching virtual scenes in different modes from scene construction preparation data obtained by preprocessing teaching related data, and adjusts the adaptive running state and/or running parameters of the teaching virtual scene according to the state change data of teachers and/or students while the scene runs, so that adaptive teaching scenes can be switched according to different teaching contents and teaching requirements, the interactivity and scene variability of the teaching process are improved, and virtual scene technology is fully integrated into teaching, thereby improving teaching efficiency and making teaching more engaging.
The invention provides a teaching interactive system based on a virtual scene, which is characterized in that:
the teaching interaction system based on the virtual scene comprises an actual teaching data acquisition module, a teaching data processing module, a teaching virtual scene construction module, a virtual scene operation monitoring module and a virtual scene adjusting module; wherein,
the actual teaching data acquisition module is used for acquiring teaching related data about a teacher object and/or a student object in a history teaching process;
the teaching data processing module is used for preprocessing the teaching related data to obtain corresponding scene construction preparation data;
the teaching virtual scene construction module is used for constructing teaching virtual scenes in different modes according to the scene construction preparation data;
the virtual scene operation monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the operation process of the teaching virtual scene;
the virtual scene adjusting module is used for adjusting the adaptive running state and/or running parameters of the current teaching virtual scene according to the object state change data;
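Taken together, the five modules form a linear pipeline: acquire, preprocess, construct, monitor, adjust. The Python sketch below shows one way the data flow could be wired up; all class names, method names, and data shapes are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class TeachingInteractionSystem:
    # Hypothetical sketch of the five-module pipeline described above.
    history: list = field(default_factory=list)  # raw teaching-related data

    def acquire(self):
        # actual teaching data acquisition module
        return [r for r in self.history if r.get("stage") is not None]

    def preprocess(self, records):
        # teaching data processing module -> scene construction preparation data
        return [{"stage": r["stage"], "features": r.get("features", {})} for r in records]

    def build_scenes(self, prep):
        # teaching virtual scene construction module -> scenes in different modes
        return [{"mode": p["stage"], "params": {"volume": 0.5}} for p in prep]

    def monitor(self, scene):
        # virtual scene operation monitoring module -> object state change data
        return {"student_engagement_delta": -0.2}

    def adjust(self, scene, change):
        # virtual scene adjusting module: tweak running parameters in place
        if change["student_engagement_delta"] < 0:
            scene["params"]["volume"] = min(1.0, scene["params"]["volume"] + 0.1)
        return scene

sys_ = TeachingInteractionSystem(history=[{"stage": "algebra", "features": {}}])
scenes = sys_.build_scenes(sys_.preprocess(sys_.acquire()))
adjusted = sys_.adjust(scenes[0], sys_.monitor(scenes[0]))
print(adjusted["params"]["volume"])  # 0.6
```

The point of the sketch is only the ordering of responsibilities; the patent leaves the concrete data formats and adjustment rules open.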
further, the actual teaching data acquisition module comprises a teaching objective data acquisition submodule, a teacher object related data acquisition submodule and a student object related data acquisition submodule; wherein,
the teaching objective data acquisition submodule is used for acquiring corresponding teaching environment data and/or teaching knowledge content data in different historical teaching stages to serve as part of the teaching related data;
the teacher object related data acquisition submodule is used for acquiring teaching state data of a teacher object in different historical teaching stages to serve as part of the teaching related data;
the student object related data acquisition submodule is used for acquiring learning state data of the student object in different historical teaching stages to serve as part of the teaching related data;
furthermore, the actual teaching data acquisition module also comprises a historical teaching stage decomposition sub-module, a teacher object determination sub-module and a student object determination sub-module; wherein,
the historical teaching stage decomposition submodule is used for decomposing the historical teaching process according to a preset teaching progress and/or preset teaching course setting so as to obtain corresponding different historical teaching stages;
the teacher object determining submodule is used for determining a teacher object corresponding to the teacher object related data acquisition submodule according to preset teaching requirements;
the student object determining submodule is used for determining the student object on which the student object related data acquisition submodule correspondingly acts;
further, the teaching data processing module comprises a data attribute identification submodule, a data classification submodule, a data extraction submodule and a data transformation submodule; wherein,
the data attribute identification submodule is used for carrying out data attribute identification processing on the teaching related data with respect to data storage form and/or data content, so as to obtain attribute information about the teaching related data;
the data classification submodule is used for classifying the teaching related data according to the attribute information so as to obtain teaching related data sets related to different data storage forms and/or different data contents;
the data extraction submodule is used for extracting data validity from the teaching related data set so as to obtain an effective teaching related data set meeting preset validity conditions;
the data transformation submodule is used for carrying out teaching-scene-matching transformation processing on the effective teaching related data set, so as to obtain corresponding scene construction preparation data;
further, the data extraction submodule comprises a data confidence degree calculation unit, a confidence degree evaluation unit and a data extraction execution unit; wherein,
the data confidence degree calculation unit is used for calculating an actual data confidence degree value corresponding to the teaching related data set;
the confidence evaluation unit is used for comparing and evaluating the actual data confidence value and an expected data confidence range so as to determine the data validity of the teaching related data set;
the data extraction execution unit is used for executing the extraction processing according to the data validity so as to obtain the effective teaching related data set meeting the preset validity condition;
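The three extraction units can be read as a filter: compute a confidence value per data set, test it against the expected range, and keep only the sets that pass. A minimal sketch follows; the patent does not specify the confidence metric, so a completeness ratio (fraction of records with no missing fields) is assumed here.

```python
def data_confidence(dataset):
    # Assumed metric: fraction of records with no missing fields.
    if not dataset:
        return 0.0
    complete = sum(1 for rec in dataset if all(v is not None for v in rec.values()))
    return complete / len(dataset)

def extract_valid(datasets, expected_range=(0.8, 1.0)):
    # confidence evaluation unit + data extraction execution unit:
    # keep only the data sets whose actual confidence value falls
    # inside the expected data confidence range.
    lo, hi = expected_range
    return [ds for ds in datasets if lo <= data_confidence(ds) <= hi]

sets = [
    [{"a": 1, "b": 2}, {"a": 3, "b": None}],   # confidence 0.5 -> rejected
    [{"a": 1, "b": 2}, {"a": 3, "b": 4}],      # confidence 1.0 -> kept
]
valid = extract_valid(sets)
print(len(valid))  # 1
```

Any scalar confidence measure with a fixed acceptance interval would fit the same three-unit structure.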
Alternatively,
the data transformation submodule comprises a teaching scene matching degree calculation unit and a data transformation execution unit; wherein,
the teaching scene matching degree calculating unit is used for calculating a teaching scene matching degree value corresponding to the effective teaching related data set;
the data transformation executing unit is used for executing the transformation processing according to the teaching scene matching degree value, so as to obtain corresponding scene construction preparation data;
further, the teaching virtual scene construction module comprises a teaching virtual scene deep learning neural network sub-module, a teaching virtual sub-scene matching sub-module, a teaching virtual sub-scene splicing sub-module and a teaching virtual scene pre-judging sub-module; wherein,
the teaching virtual scene deep learning neural network sub-module is used for analyzing and processing the scene construction preparation data through a preset teaching virtual scene deep learning neural network model so as to obtain a plurality of teaching virtual sub-scenes;
the teaching virtual sub-scene matching sub-module is used for calculating a first matching value between different teaching virtual sub-scenes and a second matching value between each teaching virtual sub-scene and different preset scene modes;
the teaching virtual sub-scene splicing sub-module is used for splicing different teaching virtual sub-scenes according to the first matching value and/or the second matching value so as to obtain corresponding teaching virtual scenes;
the teaching virtual scene pre-judging submodule is used for pre-judging the scene applicability of the teaching virtual scene so as to determine an applicability ordered list of different teaching virtual scenes;
further, the teaching virtual sub-scene splicing sub-module comprises a sub-scene classifying unit and a sub-scene splicing execution unit; wherein,
the sub-scene classifying unit is used for classifying the different teaching virtual sub-scenes according to the first matching value and/or the second matching value to form a plurality of teaching virtual sub-scene sets with splicing feasibility;
the sub-scene splicing execution unit is used for splicing different teaching virtual sub-scenes in the teaching virtual sub-scene set to obtain corresponding teaching virtual scenes;
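The classify-then-splice step can be sketched as grouping sub-scenes whose pairwise matching value clears a threshold and then joining each feasible group into one scene. The representation below (sub-scenes as labels, matching values in a dict keyed by pairs, a greedy grouping rule, and the 0.7 threshold) is entirely an assumption for illustration.

```python
def classify_subscenes(subscenes, pair_match, threshold=0.7):
    # Greedy grouping: a sub-scene joins an existing set only if its first
    # matching value against every member clears the threshold; otherwise it
    # starts a new set. pair_match maps frozenset({a, b}) -> matching value.
    groups = []
    for s in subscenes:
        for g in groups:
            if all(pair_match.get(frozenset((s, t)), 0.0) >= threshold for t in g):
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

def splice(groups):
    # splicing execution unit: join each feasible set into one scene label
    return ["+".join(g) for g in groups]

match = {frozenset(("lab", "whiteboard")): 0.9,
         frozenset(("lab", "field")): 0.2,
         frozenset(("whiteboard", "field")): 0.1}
scenes = splice(classify_subscenes(["lab", "whiteboard", "field"], match))
print(scenes)  # ['lab+whiteboard', 'field']
```

The second matching value (sub-scene against preset scene mode) could be folded in as an extra condition on the same grouping loop.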
further, the virtual scene operation monitoring module comprises an external environment state change determining submodule, a teacher object state change determining submodule and a student object state change determining submodule; wherein,
the external environment state change determining submodule is used for determining external environment state change data corresponding to the teaching virtual scene in the operation process;
the teacher object state change determining submodule is used for determining corresponding teacher object state change data of the teaching virtual scene in the operation process;
the student object state change determining submodule is used for determining corresponding student object state change data of the teaching virtual scene in the operation process;
further, the external environment state change determining submodule comprises an external environment sound data determining unit, an external environment illumination data determining unit and an external environment temperature data determining unit; wherein,
the external environment sound data determining unit is used for determining external environment sound change data corresponding to the teaching virtual scene in the operation process;
the external environment illumination data determining unit is used for determining external environment illumination change data corresponding to the teaching virtual scene in the operation process;
the external environment temperature data determining unit is used for determining external environment temperature change data corresponding to the teaching virtual scene in the operation process;
Alternatively,
the teacher object state change determining submodule comprises a teacher object sound data determining unit and a teacher object limb action data determining unit; wherein,
the teacher object sound data determining unit is used for determining teaching sound data of a teacher object in the operation process of the teaching virtual scene;
the teacher object limb action data determining unit is used for determining teaching limb action data of the teacher object in the operation process of the teaching virtual scene;
Alternatively,
the student object state change determining submodule comprises a student object face data determining unit and a student object limb action data determining unit; wherein,
the student object face data determining unit is used for determining face expression data of the student object in the operation process of the teaching virtual scene;
the student object limb action data determining unit is used for determining the class attending limb action data of the student object in the operation process of the teaching virtual scene;
further, the virtual scene adjusting module comprises a virtual scene atmosphere adjusting submodule, a virtual scene three-dimensional space adjusting submodule and a virtual scene dynamic progress adjusting submodule; wherein,
the virtual scene atmosphere adjusting submodule is used for adjusting scene operation sound and/or scene operation illumination of the current teaching virtual scene according to the object state change data;
the virtual scene three-dimensional space adjusting submodule is used for adjusting the depth of field and/or the fusion of a scene operation three-dimensional space of the current teaching virtual scene according to the object state change data;
and the virtual scene dynamic progress adjusting submodule is used for adjusting the scene operation dynamic progress of the current teaching virtual scene according to the object state change data.
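The three adjusting submodules each map the same object state change data onto a different group of scene parameters (atmosphere, three-dimensional space, dynamic progress). The sketch below is one plausible reading; every parameter name, coefficient, and clamp is an assumption.

```python
def adjust_virtual_scene(scene, change):
    # scene: dict of running parameters; change: object state change data.
    delta = change.get("attention_delta", 0.0)
    # atmosphere submodule: scene operation sound and illumination
    scene["sound"] = max(0.0, min(1.0, scene["sound"] - 0.2 * delta))
    scene["light"] = max(0.0, min(1.0, scene["light"] - 0.1 * delta))
    # three-dimensional space submodule: depth of field of the running scene
    scene["depth_of_field"] = max(0.1, scene["depth_of_field"] + 0.05 * delta)
    # dynamic progress submodule: slow the scene down when attention drops
    scene["progress_rate"] = max(0.25, scene["progress_rate"] + 0.5 * delta)
    return scene

s = adjust_virtual_scene(
    {"sound": 0.5, "light": 0.5, "depth_of_field": 0.5, "progress_rate": 1.0},
    {"attention_delta": -0.4},
)
print(s["progress_rate"])  # 0.8
```

The patent only fixes *which* parameters each submodule owns, not the adjustment law, so the linear rules here are placeholders.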
Compared with the prior art, the teaching interactive system based on the virtual scene builds teaching virtual scenes in different modes from scene construction preparation data obtained by preprocessing teaching related data, and adjusts the adaptive running state and/or running parameters of the teaching virtual scene according to the state change data of teachers and/or students while the scene runs, so that adaptive teaching scenes can be switched according to different teaching contents and teaching requirements, the interactivity and scene variability of the teaching process are improved, and virtual scene technology is fully integrated into teaching, thereby improving teaching efficiency and making teaching more engaging.
Further, in the teaching interactive system based on the virtual scene of claim 1, wherein,
the virtual scene operation monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the operation process of the teaching virtual scene; the system further accurately combines the standardized teaching virtual scenes according to the difficulty of each knowledge point, and executes the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene according to the facial expression change data of the student objects while the teaching virtual scene runs; the specific implementation steps are as follows:
step A1, constructing preparation data according to scenes obtained by preprocessing the teaching related data, and carrying out preliminary statistical classification processing according to characteristic parameters of disciplines, grades and difficulty grades of knowledge points to obtain a standardized teaching virtual scene database;
step A2, combining the characteristic parameters of the standardized teaching virtual scene database obtained in the step A1, and obtaining a teaching virtual scene set through normalization processing of a formula (1);
Figure BDA0002397721720000071
wherein e is a natural constant, and n represents the standardized teaching virtual fieldThe total number of the disciplines in the scene database, m represents the total number of the grades in the standardized teaching virtual scene database, x represents a number value (the number value is an integer 0,1,2,3, … N), corresponding to a certain discipline, y represents a number value corresponding to a certain grade, z represents a knowledge point number value, and S represents the number value of the knowledge pointxIndicates a certain subject S, G with a number xyIndicating a certain year G, L with a number yzIndicating a certain knowledge point L with a number value z,
Figure BDA0002397721720000072
representing the normalization of said characteristic parameters and the random combination, Vir (S)x,Gy,Lz) Representing the acquired teaching virtual scene set;
Step A3, in the operation process of a teaching virtual scene, acquiring facial expression state change data of the student object according to formula (2), and performing kernel function assignment processing to obtain a facial expression standard value set of the student object;
[Formula (2) appears in the source only as image references (BDA0002397721720000081/82/83) and is not reproduced here.]
wherein π is the circumference ratio; exp is the exponential function with the natural constant e as its base; sin and cos are the sine and cosine functions; K is the number of image pixels in the effective facial areas (such as eyelids, lip corners, and forehead) of the real-time image collected by the virtual scene operation monitoring module; r is the diagonal value of each pixel; i is the number value of the horizontal coordinate of each collected pixel; j is the number value of the vertical coordinate of each collected pixel; A_0 is the curve-length horizontal space vector value of the pixel whose horizontal coordinate number value is 0, taking the lower right corner of the facial expression image as the reference point and extending leftward; B_0 is the curve-length vertical space vector value of the pixel whose vertical coordinate number value is 0, taking the lower right corner of the image as the reference point and extending upward; A_i is the curve-length horizontal space vector value of the pixel whose horizontal coordinate number value is i; B_j is the curve-length vertical space vector value of the pixel whose vertical coordinate number value is j; the kernel function is applied to the curve-length vertical space vector value of each pixel and summed over the curve-length horizontal space vector values; and F(A_i, B_j) is the facial expression standard value set (e.g., pleasure, confusion) of the student object obtained after the kernel function processing.
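Formula (2) is likewise only an image reference; what remains is that per-pixel curve-length vectors A_i and B_j are passed through a kernel function and summed into the standard values F(A_i, B_j). The sketch below substitutes a Gaussian (RBF) kernel as a placeholder, since the patent does not disclose the actual kernel; the sample vectors are illustrative.

```python
import math

def rbf(a, b, gamma=0.5):
    # Placeholder kernel; the actual kernel function is not disclosed.
    return math.exp(-gamma * (a - b) ** 2)

def expression_standard_values(A, B):
    # F(A_i, B_j): for each horizontal curve-length vector A_i, sum the
    # kernel responses against the vertical curve-length vectors B_j.
    return [sum(rbf(a, b) for b in B) for a in A]

A = [0.1, 0.4, 0.9]   # horizontal curve-length vectors (illustrative)
B = [0.2, 0.5]        # vertical curve-length vectors (illustrative)
F = expression_standard_values(A, B)
print(len(F))  # 3
```

In the patent's terms, A and B would be extracted from the K effective facial pixels of the monitored real-time image.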
Step A4, comparing the facial expression standard value set of the student object obtained in step A3 with the teaching virtual scene set obtained in step A2, so as to execute the operation of adjusting the adaptive running state and/or running parameters of the current teaching virtual scene;
[The comparison formula appears in the source only as image references (BDA0002397721720000091/92/93/94) and is not reproduced here.]
wherein x_0 is the discipline number value after dynamic adjustment; y_0 is the grade number value after dynamic adjustment; z_0 is the knowledge point number value after dynamic adjustment; the first expression denotes that the current teaching virtual scene data is dynamically adjusted according to the facial expression standard values of the student object; the second expression denotes the adjusted teaching virtual scene data; and when the final condition is not satisfied, the current teaching virtual scene data does not match the teaching virtual scene data required by the student object, and the operation of adjusting the adaptive running state and/or running parameters of the current teaching virtual scene must be executed.
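Step A4's comparison can be sketched as a membership test: an expression-driven candidate triple (x_0, y_0, z_0) is looked up in the scene set Vir, and a mismatch triggers the adjustment. The threshold, the "step the difficulty down" rule, and the data shapes below are all assumptions for illustration.

```python
def adjust_scene(current, scene_set, expression_score, bored_below=0.5):
    # current: (discipline, grade, knowledge-point number) triple.
    # If the student's expression standard value falls below the threshold,
    # propose the adjusted triple (x_0, y_0, z_0) with an easier knowledge
    # point, and accept it only if it exists in the scene set Vir.
    x, y, z = current
    if expression_score >= bored_below:
        return current          # matched: no adjustment needed
    candidate = (x, y, max(0, z - 1))
    return candidate if candidate in scene_set else current

vir = {("math", "grade7", 1), ("math", "grade7", 2)}
print(adjust_scene(("math", "grade7", 2), vir, expression_score=0.3))
# ('math', 'grade7', 1)
```

A confused expression thus walks the scene toward an easier knowledge point, while a positive one leaves the running scene untouched.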
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a teaching interactive system based on a virtual scene provided in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic structural diagram of a teaching interaction system based on a virtual scene according to an embodiment of the present invention. The teaching interaction system based on the virtual scene comprises an actual teaching data acquisition module, a teaching data processing module, a teaching virtual scene construction module, a virtual scene operation monitoring module and a virtual scene adjusting module; wherein,
the actual teaching data acquisition module is used for acquiring teaching related data about a teacher object and/or a student object in a history teaching process;
the teaching data processing module is used for preprocessing the teaching related data to obtain corresponding scene construction preparation data;
the teaching virtual scene construction module is used for constructing teaching virtual scenes in different modes according to the scene construction preparation data;
the virtual scene operation monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the operation process of the teaching virtual scene;
the virtual scene adjusting module is used for adjusting the operation state and/or the operation parameters of the current teaching virtual scene according to the object state change data.
Preferably, the actual teaching data acquisition module comprises a teaching objective data acquisition submodule, a teacher object related data acquisition submodule and a student object related data acquisition submodule; wherein,
the teaching objective data acquisition submodule is used for acquiring corresponding teaching environment data and/or teaching knowledge content data in different historical teaching stages to serve as part of the teaching related data;
the teacher object related data acquisition submodule is used for acquiring teaching state data of a teacher object in different historical teaching stages to serve as part of the teaching related data;
the student object related data acquisition submodule is used for acquiring learning state data of the student object in different historical teaching stages to serve as part of the teaching related data.
Preferably, the actual teaching data acquisition module further comprises a historical teaching stage decomposition sub-module, a teacher object determination sub-module and a student object determination sub-module; wherein,
the historical teaching stage decomposition submodule is used for decomposing the historical teaching process according to a preset teaching progress and/or preset teaching course setting so as to obtain corresponding different historical teaching stages;
the teacher object determining submodule is used for determining a teacher object corresponding to the teacher object related data acquisition submodule according to preset teaching requirements;
the student object determining submodule is used for determining the student object on which the student object related data acquisition submodule correspondingly acts.
Preferably, the teaching data processing module comprises a data attribute identification submodule, a data classification submodule, a data extraction submodule and a data transformation submodule; wherein,
the data attribute identification submodule is used for carrying out data attribute identification processing on the teaching related data with respect to data storage form and/or data content, so as to obtain attribute information about the teaching related data;
the data classification submodule is used for classifying the teaching related data according to the attribute information so as to obtain teaching related data sets related to different data storage forms and/or different data contents;
the data extraction submodule is used for extracting data validity from the teaching related data set so as to obtain an effective teaching related data set meeting preset validity conditions;
the data transformation submodule is used for carrying out teaching-scene-matching transformation processing on the effective teaching related data set, so as to obtain corresponding scene construction preparation data.
Preferably, the data extraction submodule comprises a data confidence calculation unit, a confidence evaluation unit and a data extraction execution unit; wherein,
the data confidence degree calculation unit is used for calculating an actual data confidence degree value corresponding to the teaching related data set;
the confidence evaluation unit is used for comparing and evaluating the actual data confidence value and the expected data confidence range so as to determine the data validity of the teaching related data set;
the data extraction execution unit is used for executing the extraction processing according to the data validity so as to obtain the effective teaching related data set meeting the preset validity condition.
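A minimal sketch of the confidence-based extraction: compute an actual confidence value per record and keep only records falling inside the expected range. The interval and the `confidence_of` callback are assumptions; the patent does not fix concrete values.

```python
def extract_valid(dataset, confidence_of, expected_range=(0.8, 1.0)):
    """Keep only records whose actual confidence lies in the expected range."""
    low, high = expected_range
    return [rec for rec in dataset if low <= confidence_of(rec) <= high]
```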
Preferably, the data transformation submodule comprises a teaching scene matching degree calculation unit and a data transformation execution unit; wherein,
the teaching scene matching degree calculating unit is used for calculating a teaching scene matching degree value corresponding to the effective teaching related data set;
the data transformation executing unit is used for executing the transformation processing according to the teaching scene matching value so as to obtain corresponding scene construction preparation data.
Preferably, the teaching virtual scene construction module comprises a teaching virtual scene deep learning neural network sub-module, a teaching virtual sub-scene matching sub-module, a teaching virtual sub-scene splicing sub-module and a teaching virtual scene pre-judging sub-module; wherein,
the teaching virtual scene deep learning neural network sub-module is used for analyzing and processing the scene construction preparation data through a preset teaching virtual scene deep learning neural network model so as to obtain a plurality of teaching virtual sub-scenes;
the teaching virtual sub-scene matching sub-module is used for calculating a first matching value between different teaching virtual sub-scenes and a second matching value between each teaching virtual sub-scene and different preset scene modes;
the teaching virtual sub-scene splicing sub-module is used for splicing different teaching virtual sub-scenes according to the first matching value and/or the second matching value so as to obtain corresponding teaching virtual scenes;
the teaching virtual scene pre-judging submodule is used for pre-judging the scene applicability of the teaching virtual scene so as to determine an applicability ranking list of different teaching virtual scenes.
Preferably, the teaching virtual sub-scene splicing sub-module comprises a sub-scene classifying unit and a sub-scene splicing execution unit; wherein,
the sub-scene classifying unit is used for classifying the different teaching virtual sub-scenes into a plurality of teaching virtual sub-scene sets with splicing feasibility according to the first matching value and/or the second matching value;
the sub-scene splicing execution unit is used for splicing different teaching virtual sub-scenes in the teaching virtual sub-scene set so as to obtain the corresponding teaching virtual scene.
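The classify-then-splice behavior above can be read as grouping sub-scenes whose pairwise match values clear a threshold and splicing within each group. A greedy sketch under that assumption; the threshold and the `match` function stand in for the patent's first/second matching values:

```python
def group_splicable(sub_scenes, match, threshold=0.7):
    """Greedily group sub-scenes whose pairwise match values clear the threshold."""
    groups = []
    for scene in sub_scenes:
        for group in groups:
            # A sub-scene joins a group only if it matches every member well enough.
            if all(match(scene, member) >= threshold for member in group):
                group.append(scene)
                break
        else:
            groups.append([scene])   # no splicing-feasible group: start a new one
    return groups
```

Each resulting group is a "teaching virtual sub-scene set with splicing feasibility"; the splicing execution unit would then merge the members of each group into one teaching virtual scene.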
Preferably, the virtual scene operation monitoring module comprises an external environment state change determining submodule, a teacher object state change determining submodule and a student object state change determining submodule; wherein,
the external environment state change determining submodule is used for determining external environment state change data corresponding to the teaching virtual scene in the operation process;
the teacher object state change determining submodule is used for determining corresponding teacher object state change data of the teaching virtual scene in the operation process;
the student object state change determining submodule is used for determining corresponding student object state change data of the teaching virtual scene in the operation process.
Preferably, the external environment state change determining submodule includes an external environment sound data determining unit, an external environment illuminance data determining unit, and an external environment temperature data determining unit; wherein,
the external environment sound data determining unit is used for determining external environment sound change data corresponding to the teaching virtual scene in the operation process;
the external environment illumination data determining unit is used for determining external environment illumination change data corresponding to the teaching virtual scene in the operation process;
the external environment temperature data determining unit is used for determining external environment temperature change data corresponding to the teaching virtual scene in the operation process.
Preferably, the teacher object state change determination sub-module includes a teacher object sound data determination unit and a teacher object limb motion data determination unit; wherein,
the teacher object sound data determining unit is used for determining teaching sound data of a teacher object in the operation process of the teaching virtual scene;
the teacher object limb action data determining unit is used for determining teaching limb action data of the teacher object in the operation process of the teaching virtual scene.
Preferably, the student object state change determining submodule comprises a student object face data determining unit and a student object limb action data determining unit; wherein,
the student object face data determining unit is used for determining face expression data of the student object in the operation process of the teaching virtual scene;
the student object limb action data determining unit is used for determining the class attending limb action data of the student object in the operation process of the teaching virtual scene.
Preferably, the virtual scene adjusting module comprises a virtual scene atmosphere adjusting submodule, a virtual scene three-dimensional space adjusting submodule and a virtual scene dynamic progress adjusting submodule; wherein,
the virtual scene atmosphere adjusting submodule is used for adjusting scene operation sound and/or scene operation illumination of the current teaching virtual scene according to the object state change data;
the virtual scene three-dimensional space adjusting submodule is used for adjusting the depth of field and/or the fusion of a scene operation three-dimensional space of the current teaching virtual scene according to the object state change data;
the virtual scene dynamic progress adjusting submodule is used for adjusting the scene operation dynamic progress of the current teaching virtual scene according to the object state change data.
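The three adjustment submodules can be pictured as one dispatcher that maps object state-change data onto atmosphere and progress parameters. Everything here (the normalized `engagement` signal in [-1, 1], the step sizes, the parameter names) is a hypothetical illustration, not values from the patent.

```python
def adjust_scene_parameters(scene, state_change):
    """Map a normalized engagement signal (negative = attention dropping)
    onto scene atmosphere and dynamic-progress parameters."""
    engagement = state_change.get("engagement", 0.0)
    adjusted = dict(scene)
    # Atmosphere submodule: lower sound and illumination when engagement drops.
    adjusted["volume"] = max(0.0, scene["volume"] + 0.1 * engagement)
    adjusted["illuminance"] = max(0.0, scene["illuminance"] + 0.1 * engagement)
    # Dynamic progress submodule: slow the scene when students fall behind.
    adjusted["progress_rate"] = max(0.25, scene["progress_rate"] + 0.25 * engagement)
    return adjusted
```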
Preferably, the scene operation object monitoring module is configured to acquire object state change data of the teacher object and/or the student object in the operation process of the teaching virtual scene; wherein,
the method further comprises the steps of accurately combining the standardized teaching virtual scenes according to the difference of the difficulty of each knowledge point, and executing the operation of adaptively adjusting the running state and/or the running parameters of the current teaching virtual scene according to the change data of the facial expressions of the student objects during the running period of the teaching virtual scenes, wherein the specific implementation steps are as follows:
step A1, constructing preparation data according to scenes obtained by preprocessing the teaching related data, and carrying out preliminary statistical classification processing according to characteristic parameters of disciplines, grades and difficulty grades of knowledge points to obtain a standardized teaching virtual scene database;
step A2, combining the characteristic parameters of the standardized teaching virtual scene database obtained in the step A1, and obtaining a teaching virtual scene set through normalization processing of a formula (1);
[Formula (1): rendered as an image in the original, not reproduced here]
wherein e is a natural constant, N represents the total number of disciplines in the standardized teaching virtual scene database, m represents the total number of grades in the standardized teaching virtual scene database, x represents the number value corresponding to a certain discipline (an integer 0, 1, 2, 3, …, N), y represents the number value corresponding to a certain grade, z represents a knowledge point number value, S_x denotes the discipline S with number value x, G_y denotes the grade G with number value y, and L_z denotes the knowledge point L with number value z,
[expression image not reproduced] denotes normalizing said characteristic parameters and randomly combining them, and Vir(S_x, G_y, L_z) denotes the acquired teaching virtual scene set;
step A3, in the operation process of a teaching virtual scene, acquiring facial expression state change data of a student object according to formula (2), and performing kernel function assignment processing to acquire a facial expression standard value set of the student object;
[Formula (2): rendered as an image in the original, not reproduced here]
where π is the circular constant, exp is the exponential function with the natural constant e as its base, sin and cos are the sine and cosine functions respectively, K represents the number of image pixel points in the effective facial areas (eyelids, lip corners, forehead, etc.) of the image acquired in real time by the scene operation object monitoring module, r represents the diagonal value of each pixel point, i represents the transverse coordinate number value of each acquired pixel point, j represents the longitudinal coordinate number value of each acquired pixel point, A_0 represents the curve-length transverse space vector value corresponding to a pixel point whose transverse coordinate number value is 0, taking the lower right corner of the facial expression image as the reference point and extending leftwards, B_0 represents the curve-length longitudinal space vector value corresponding to a pixel point whose longitudinal coordinate number value is 0, taking the lower right corner of the facial expression image as the reference point and extending upwards, A_i represents the curve-length transverse space vector value corresponding to the pixel point whose transverse coordinate number value is i, and B_j represents the curve-length longitudinal space vector value corresponding to the pixel point whose longitudinal coordinate number value is j,
[expression image not reproduced] denotes applying kernel function processing to the curve-length longitudinal space vector value of each pixel point,
[expression image not reproduced] denotes the sum of the kernel functions over the curve-length transverse space vector values of the pixel points, and F(A_i, B_j) denotes the facial expression standard value set (e.g., pleasure, confusion) of the student object acquired after the kernel function processing.
step A4, comparing the facial expression standard value set of the student object obtained in step A3 with the teaching virtual scene set obtained in step A2, so as to execute the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene;
[Formula: rendered as an image in the original, not reproduced here]
wherein x_0 denotes the discipline number value after dynamic adjustment, y_0 denotes the grade number value after dynamic adjustment, and z_0 denotes the knowledge point number value after dynamic adjustment,
[expression image not reproduced] denotes dynamically adjusting the current teaching virtual scene data according to the facial expression standard values of the student object,
[expression image not reproduced] denotes the adjusted teaching virtual scene data; when [expression image not reproduced] is not 1, the current teaching virtual scene data does not match the teaching virtual scene data required by the student object, and the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene needs to be executed.
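Because formulas (1) and (2) survive only as image placeholders, steps A1-A4 can be paraphrased only loosely: enumerate standardized scenes keyed by (discipline, grade, knowledge point), reduce the student's facial expression data to a score, and switch scenes when that score deviates from the matched value 1. The scoring, the step direction, and every name below are assumptions for illustration, not the patent's formulas.

```python
def build_scene_set(num_disciplines, num_grades, knowledge_points):
    # Steps A1-A2 sketch: enumerate the standardized scene set Vir(Sx, Gy, Lz).
    return {(x, y, z)
            for x in range(num_disciplines)
            for y in range(num_grades)
            for z in knowledge_points}

def adjust_scene(current, expression_score, scene_set, matched=1.0, tolerance=0.1):
    # Steps A3-A4 sketch: a score near the matched value keeps the scene;
    # otherwise step the knowledge-point number down (confused) or up (bored),
    # staying inside the standardized scene set.
    x, y, z = current
    if abs(expression_score - matched) <= tolerance:
        return current
    step = -1 if expression_score < matched else 1
    candidate = (x, y, z + step)
    return candidate if candidate in scene_set else current
```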
The beneficial effects of the above technical scheme are: the technical scheme is that the scene operation object monitoring module collects facial expression data of a student object in real time and analyzes the facial expression data so as to judge whether the teaching virtual scene data generated according to the requirements of the student object is consistent with the comprehension capability of the student object; the technical scheme provides technical support for the online intelligent automatic dynamic adjustment of the teaching virtual scene based on the teaching interactive system of the virtual scene, and the corresponding teaching virtual scene is formulated according to the characteristics of each student object, so that the teaching interest and the teaching efficiency are greatly improved.
It can be known from the content of the above embodiment that, the virtual scene based teaching interaction system constructs preparation data based on the scene obtained by preprocessing the teaching related data, constructs teaching virtual scenes in different modes, and adjusts the adaptive operating state and/or operating parameters of the teaching virtual scenes according to the state change data of teachers and/or students during the operation of the teaching virtual scenes, so that adaptive teaching scenes can be switched according to different teaching contents and teaching requirements, the interactivity and scene variability during the teaching process are improved, and the virtual scene technology is fully fused and applied to the teaching, thereby improving the teaching efficiency and the teaching interest.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A teaching interaction system based on a virtual scene, characterized in that:
the teaching interaction system based on the virtual scene comprises an actual teaching data acquisition module, a teaching data processing module, a teaching virtual scene construction module, a virtual scene operation monitoring module and a virtual scene adjusting module; wherein,
the actual teaching data acquisition module is used for acquiring teaching related data about a teacher object and/or a student object in a history teaching process;
the teaching data processing module is used for preprocessing the teaching related data to obtain corresponding scene construction preparation data;
the teaching virtual scene construction module is used for constructing preparation data according to the scene and constructing teaching virtual scenes in different modes;
the scene operation object monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the operation process of the teaching virtual scene;
and the virtual scene adjusting module is used for carrying out adaptive operation state and/or operation parameter adjustment on the current teaching virtual scene according to the object state change data.
2. The virtual scene-based instructional interaction system of claim 1, wherein:
the actual teaching data acquisition module comprises a teaching objective data acquisition submodule, a teacher object related data acquisition submodule and a student object related data acquisition submodule; wherein,
the teaching objective data acquisition submodule is used for acquiring corresponding teaching environment data and/or teaching knowledge content data in different historical teaching stages to serve as part of the teaching related data;
the teacher object related data acquisition submodule is used for acquiring teaching state data of a teacher object in different historical teaching stages to serve as part of the teaching related data;
the student object related data acquisition submodule is used for acquiring learning state data of the student object in different historical teaching stages to serve as part of the teaching related data.
3. The virtual scene-based instructional interaction system of claim 2, wherein:
the actual teaching data acquisition module also comprises a historical teaching stage decomposition sub-module, a teacher object determination sub-module and a student object determination sub-module; wherein,
the historical teaching stage decomposition submodule is used for decomposing the historical teaching process according to a preset teaching progress and/or preset teaching course setting so as to obtain corresponding different historical teaching stages;
the teacher object determining submodule is used for determining a teacher object corresponding to the teacher object related data acquisition submodule according to preset teaching requirements;
the student object determining submodule is used for determining the student object on which the student object related data acquisition submodule correspondingly acts.
4. The virtual scene-based instructional interaction system of claim 1, wherein:
the teaching data processing module comprises a data attribute identification submodule, a data classification submodule, a data extraction submodule and a data transformation submodule; wherein,
the data attribute identification submodule is used for carrying out data attribute identification processing on the teaching related data in a data storage form and/or data content so as to obtain attribute information on the teaching related data;
the data classification submodule is used for classifying the teaching related data according to the attribute information so as to obtain teaching related data sets related to different data storage forms and/or different data contents;
the data extraction submodule is used for extracting data validity from the teaching related data set so as to obtain an effective teaching related data set meeting preset validity conditions;
and the data transformation submodule is used for carrying out transformation processing on the matching of teaching scenes on the effective teaching related data set so as to obtain corresponding scene construction preparation data.
5. The virtual scene-based instructional interaction system of claim 4, wherein
The data extraction submodule comprises a data confidence coefficient calculation unit, a confidence coefficient evaluation unit and a data extraction execution unit; wherein,
the data confidence degree calculation unit is used for calculating an actual data confidence degree value corresponding to the teaching related data set;
the confidence evaluation unit is used for comparing and evaluating the actual data confidence value and an expected data confidence range so as to determine the data validity of the teaching related data set;
the data extraction execution unit is used for executing the extraction processing according to the data validity so as to obtain the effective teaching related data set meeting the preset validity condition;
alternatively,
the data transformation submodule comprises a teaching scene matching degree calculation unit and a data transformation execution unit; wherein,
the teaching scene matching degree calculating unit is used for calculating a teaching scene matching degree value corresponding to the effective teaching related data set;
the data transformation executing unit is used for executing the transformation processing according to the teaching scene matching value so as to obtain corresponding scene construction preparation data.
6. The virtual scene-based instructional interaction system of claim 1, wherein:
the teaching virtual scene construction module comprises a teaching virtual scene deep learning neural network sub-module, a teaching virtual sub-scene matching sub-module, a teaching virtual sub-scene splicing sub-module and a teaching virtual scene pre-judging sub-module; wherein,
the teaching virtual scene deep learning neural network sub-module is used for analyzing and processing the scene construction preparation data through a preset teaching virtual scene deep learning neural network model so as to obtain a plurality of teaching virtual sub-scenes;
the teaching virtual sub-scene matching sub-module is used for calculating a first matching value between different teaching virtual sub-scenes and a second matching value between each teaching virtual sub-scene and different preset scene modes;
the teaching virtual sub-scene splicing sub-module is used for splicing different teaching virtual sub-scenes according to the first matching value and/or the second matching value so as to obtain corresponding teaching virtual scenes;
the teaching virtual scene pre-judging submodule is used for pre-judging the scene applicability of the teaching virtual scene so as to determine an applicability ordered list of different teaching virtual scenes.
7. The virtual scene-based instructional interaction system of claim 6, wherein:
the teaching virtual sub-scene splicing sub-module comprises a sub-scene classifying unit and a sub-scene splicing execution unit; wherein,
the sub-scene classifying unit is used for classifying the different teaching virtual sub-scenes according to the first matching value and/or the second matching value to form a plurality of teaching virtual sub-scene sets with splicing feasibility;
and the sub-scene splicing execution unit is used for splicing different teaching virtual sub-scenes in the teaching virtual sub-scene set so as to obtain the corresponding teaching virtual scene.
8. The virtual scene-based instructional interaction system of claim 1, wherein:
the virtual scene operation monitoring module comprises an external environment state change determining submodule, a teacher object state change determining submodule and a student object state change determining submodule; wherein,
the external environment state change determining submodule is used for determining external environment state change data corresponding to the teaching virtual scene in the operation process;
the teacher object state change determining submodule is used for determining corresponding teacher object state change data of the teaching virtual scene in the operation process;
the student object state change determining submodule is used for determining corresponding student object state change data of the teaching virtual scene in the operation process;
the external environment state change determining submodule comprises an external environment sound data determining unit, an external environment illumination data determining unit and an external environment temperature data determining unit; wherein,
the external environment sound data determining unit is used for determining external environment sound change data corresponding to the teaching virtual scene in the operation process;
the external environment illumination data determining unit is used for determining external environment illumination change data corresponding to the teaching virtual scene in the operation process;
the external environment temperature data determining unit is used for determining external environment temperature change data corresponding to the teaching virtual scene in the operation process;
alternatively,
the teacher object state change determining submodule comprises a teacher object sound data determining unit and a teacher object limb action data determining unit; wherein,
the teacher object sound data determining unit is used for determining teaching sound data of a teacher object in the operation process of the teaching virtual scene;
the teacher object limb action data determining unit is used for determining teaching limb action data of the teacher object in the operation process of the teaching virtual scene;
alternatively,
the student object state change determining submodule comprises a student object face data determining unit and a student object limb action data determining unit; wherein,
the student object face data determining unit is used for determining face expression data of the student object in the operation process of the teaching virtual scene;
the student object limb action data determining unit is used for determining the class attending limb action data of the student object in the operation process of the teaching virtual scene.
9. The virtual scene-based instructional interaction system of claim 1, wherein:
the virtual scene adjusting module comprises a virtual scene atmosphere adjusting submodule, a virtual scene three-dimensional space adjusting submodule and a virtual scene dynamic progress adjusting submodule; wherein,
the virtual scene atmosphere adjusting submodule is used for adjusting scene operation sound and/or scene operation illumination of the current teaching virtual scene according to the object state change data;
the virtual scene three-dimensional space adjusting submodule is used for adjusting the depth of field and/or the fusion of a scene operation three-dimensional space of the current teaching virtual scene according to the object state change data;
and the virtual scene dynamic progress adjusting submodule is used for adjusting the scene operation dynamic progress of the current teaching virtual scene according to the object state change data.
10. The virtual scene-based instructional interaction system of claim 1, wherein:
the scene operation object monitoring module is used for acquiring object state change data of the teacher object and/or the student object in the operation process of the teaching virtual scene; the method further comprises the steps of accurately combining the standardized teaching virtual scenes according to the difference of the difficulty of each knowledge point, and executing the operation of adaptively adjusting the running state and/or the running parameters of the current teaching virtual scene according to the change data of the facial expressions of the student objects during the running period of the teaching virtual scenes, wherein the specific implementation steps are as follows:
step A1, constructing preparation data according to scenes obtained by preprocessing the teaching related data, and carrying out preliminary statistical classification processing according to characteristic parameters of disciplines, grades and difficulty grades of knowledge points to obtain a standardized teaching virtual scene database;
step A2, combining the characteristic parameters of the standardized teaching virtual scene database obtained in the step A1, and obtaining a teaching virtual scene set through normalization processing of a formula (1);
[Formula (1): rendered as an image in the original, not reproduced here]
wherein e is a natural constant, n represents the total number of disciplines in the standardized teaching virtual scene database, m represents the total number of grades in the standardized teaching virtual scene database, x represents the number value corresponding to a certain discipline, y represents the number value corresponding to a certain grade, z represents a knowledge point number value, S_x denotes the discipline S with number value x, G_y denotes the grade G with number value y, and L_z denotes the knowledge point L with number value z,
[expression image not reproduced] denotes normalizing said characteristic parameters and randomly combining them, and Vir(S_x, G_y, L_z) denotes the acquired teaching virtual scene set;
step A3, in the operation process of a teaching virtual scene, acquiring facial expression state change data of a student object according to formula (2), and performing kernel function assignment processing to acquire a facial expression standard value set of the student object;
[Formula (2): rendered as an image in the original, not reproduced here]
wherein π is the circular constant, exp is the exponential function with the natural constant e as its base, sin and cos are the sine and cosine functions respectively, K represents the number of image pixel points in the effective facial areas (eyelids, lip corners, forehead, etc.) of the image acquired in real time by the scene operation object monitoring module, r represents the diagonal value of each pixel point, i represents the transverse coordinate number value of each acquired pixel point, j represents the longitudinal coordinate number value of each acquired pixel point, A_0 represents the curve-length transverse space vector value corresponding to a pixel point whose transverse coordinate number value is 0, taking the lower right corner of the facial expression image as the reference point and extending leftwards, B_0 represents the curve-length longitudinal space vector value corresponding to a pixel point whose longitudinal coordinate number value is 0, taking the lower right corner of the facial expression image as the reference point and extending upwards, A_i represents the curve-length transverse space vector value corresponding to the pixel point whose transverse coordinate number value is i, and B_j represents the curve-length longitudinal space vector value corresponding to the pixel point whose longitudinal coordinate number value is j,
[expression image not reproduced] denotes applying kernel function processing to the curve-length longitudinal space vector value of each pixel point,
[expression image not reproduced] denotes the sum of the kernel functions over the curve-length transverse space vector values of the pixel points, and F(A_i, B_j) denotes the facial expression standard value set (e.g., pleasure, confusion) of the student object acquired after the kernel function processing.
step A4, comparing the facial expression standard value set of the student object obtained in step A3 with the teaching virtual scene set obtained in step A2, so as to execute the operation of adaptively adjusting the running state and/or running parameters of the current teaching virtual scene;
[Formula: rendered as an image in the original, not reproduced here]
wherein x_0 denotes the discipline number value after dynamic adjustment, y_0 denotes the grade number value after dynamic adjustment, and z_0 denotes the knowledge point number value after dynamic adjustment,
Figure FDA0002397721710000082
indicates that the current teaching virtual scene data is dynamically adjusted according to the facial expression standard values of the student object,
Figure FDA0002397721710000083
represents the adjusted teaching virtual scene data; when
Figure FDA0002397721710000084
is not 1, the current teaching virtual scene data does not match the teaching virtual scene data required by the student object, and the operation of adaptively adjusting the running state and/or the running parameters of the current teaching virtual scene needs to be executed.
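The match-then-adjust logic of step A4 can be sketched as follows. This is a hypothetical stand-in: the patent's match indicator is given only as an unextracted formula image, so the tolerance comparison, the function names, and the choice of stepping the knowledge-point number z0 are all illustrative assumptions.

```python
def match_indicator(expression_values, scene_profile, tolerance=0.1):
    """Return 1 when the student's facial expression standard values fall
    within tolerance of the profile expected for the current teaching virtual
    scene, else 0 (assumed form of the patent's match indicator)."""
    return int(all(abs(e - s) <= tolerance
                   for e, s in zip(expression_values, scene_profile)))

def adjust_scene(scene, expression_values, scene_profile):
    """If the indicator is not 1, dynamically adjust the scene numbers
    (x0 subject, y0 grade, z0 knowledge point); otherwise keep the scene."""
    if match_indicator(expression_values, scene_profile) != 1:
        adjusted = dict(scene)
        adjusted["z0"] += 1   # hypothetical adjustment: move to the next knowledge point
        return adjusted
    return scene

scene = {"x0": 1, "y0": 3, "z0": 42}  # subject, grade, knowledge-point numbers
matched = adjust_scene(scene, [0.35, 0.36], [0.34, 0.37])     # within tolerance, unchanged
mismatched = adjust_scene(scene, [0.90, 0.10], [0.34, 0.37])  # indicator 0, adjusted
```

The per-expression scene profile and the specific adjustment policy would, in the described system, come from the teaching virtual scene set built in step A2.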
CN202010137104.2A 2020-03-02 2020-03-02 Teaching interaction system based on virtual scene Active CN111258433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010137104.2A CN111258433B (en) 2020-03-02 2020-03-02 Teaching interaction system based on virtual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010137104.2A CN111258433B (en) 2020-03-02 2020-03-02 Teaching interaction system based on virtual scene

Publications (2)

Publication Number Publication Date
CN111258433A true CN111258433A (en) 2020-06-09
CN111258433B CN111258433B (en) 2024-04-02

Family

ID=70947494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010137104.2A Active CN111258433B (en) 2020-03-02 2020-03-02 Teaching interaction system based on virtual scene

Country Status (1)

Country Link
CN (1) CN111258433B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140335497A1 (en) * 2007-08-01 2014-11-13 Michael Gal System, device, and method of adaptive teaching and learning
WO2017193709A1 (en) * 2016-05-12 2017-11-16 深圳市鹰硕技术有限公司 Internet-based teaching and learning method and system
CN110069139A (en) * 2019-05-08 2019-07-30 上海优谦智能科技有限公司 VR technology realizes the experiencing system of Tourism teaching practice

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cai Hua: "Application of Virtual Reality Technology in Engineering Drawing Courseware" *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017085A (en) * 2020-08-18 2020-12-01 上海松鼠课堂人工智能科技有限公司 Intelligent virtual teacher image personalization method
CN112017085B (en) * 2020-08-18 2021-07-20 上海松鼠课堂人工智能科技有限公司 Intelligent virtual teacher image personalization method
CN112017496A (en) * 2020-08-30 2020-12-01 上海松鼠课堂人工智能科技有限公司 Student computing power analysis method based on game learning
CN111985582A (en) * 2020-09-27 2020-11-24 上海松鼠课堂人工智能科技有限公司 Knowledge point mastering degree evaluation method based on learning behaviors
CN112508162B (en) * 2020-11-17 2024-04-05 珠海格力电器股份有限公司 Emergency management method, device and system based on system linkage
CN112508162A (en) * 2020-11-17 2021-03-16 珠海格力电器股份有限公司 Emergency management method, device and system based on system linkage
CN113096252A (en) * 2021-03-05 2021-07-09 华中师范大学 Multi-movement mechanism fusion method in hybrid enhanced teaching scene
CN113096252B (en) * 2021-03-05 2021-11-02 华中师范大学 Multi-movement mechanism fusion method in hybrid enhanced teaching scene
CN113409635A (en) * 2021-06-17 2021-09-17 上海松鼠课堂人工智能科技有限公司 Interactive teaching method and system based on virtual reality scene
CN115100004B (en) * 2022-06-23 2023-05-30 北京新唐思创教育科技有限公司 Online teaching system, method, device, equipment and medium
CN115100004A (en) * 2022-06-23 2022-09-23 北京新唐思创教育科技有限公司 Online teaching system, method, device, equipment and medium
CN115114537A (en) * 2022-08-29 2022-09-27 成都航空职业技术学院 Interactive virtual teaching aid implementation method based on file content identification
CN115114537B (en) * 2022-08-29 2022-11-22 成都航空职业技术学院 Interactive virtual teaching aid implementation method based on file content identification

Also Published As

Publication number Publication date
CN111258433B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
CN111258433A (en) Teaching interactive system based on virtual scene
CN111738908B (en) Scene conversion method and system for generating countermeasure network by combining instance segmentation and circulation
CN108629338B (en) Face beauty prediction method based on LBP and convolutional neural network
CN112131978B (en) Video classification method and device, electronic equipment and storage medium
CN109214298B (en) Asian female color value scoring model method based on deep convolutional network
CN110135282B (en) Examinee return plagiarism cheating detection method based on deep convolutional neural network model
CN112183238B (en) Remote education attention detection method and system
CN112132197A (en) Model training method, image processing method, device, computer equipment and storage medium
CN110796018A (en) Hand motion recognition method based on depth image and color image
CN110188600B (en) Drawing evaluation method, system and storage medium
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
CN115810163B (en) Teaching evaluation method and system based on AI classroom behavior recognition
CN113505854A (en) Method, device, equipment and medium for constructing facial image quality evaluation model
CN113723530A (en) Intelligent psychological assessment system based on video analysis and electronic psychological sand table
CN116052222A (en) Cattle face recognition method for naturally collecting cattle face image
CN111814733A (en) Concentration degree detection method and device based on head posture
CN115546861A (en) Online classroom concentration degree identification method, system, equipment and medium
CN115205626A (en) Data enhancement method applied to field of coating defect detection
CN111275020A (en) Room state identification method
CN111626781A (en) Advertisement putting method based on artificial intelligence
CN111243373B (en) Panoramic simulation teaching system
CN113822907A (en) Image processing method and device
CN113569616A (en) Content identification method and device, storage medium and electronic equipment
CN116259104A (en) Intelligent dance action quality assessment method, device and system
CN110443277A (en) A small amount of sample classification method based on attention model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200237 9 / F and 10 / F, building 2, No. 188, Yizhou Road, Xuhui District, Shanghai

Applicant after: Shanghai squirrel classroom Artificial Intelligence Technology Co.,Ltd.

Address before: 200237 9 / F and 10 / F, building 2, No. 188, Yizhou Road, Xuhui District, Shanghai

Applicant before: SHANGHAI YIXUE EDUCATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant