CN111047930B - Processing method and device and electronic equipment - Google Patents

Processing method and device and electronic equipment

Info

Publication number
CN111047930B
Authority
CN
China
Prior art keywords
video content
information
environment
playing
recording
Prior art date
Legal status
Active
Application number
CN201911205436.3A
Other languages
Chinese (zh)
Other versions
CN111047930A (en)
Inventor
姚涔
张印帅
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201911205436.3A
Publication of CN111047930A
Application granted
Publication of CN111047930B
Status: Active


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the application discloses a processing method, which includes the following steps: playing first video content for teaching, wherein the first video content includes a plurality of objects in a recording environment; obtaining target object information in the current playing environment based at least on interaction information of a first object and a second object in the first video content; and replacing the first object and/or the second object in the first video content with the target object based on the target object information, so as to play video content including the target object; wherein the first object and the second object are of the same or different types. The embodiment of the application also discloses a processing apparatus and an electronic device. The embodiments of the application can improve classroom interactivity in remote teaching, bringing remote teaching closer to real classroom teaching and improving its efficiency.

Description

Processing method and device and electronic equipment
Technical Field
The present application relates to the field of education technologies, and in particular, to a processing method and apparatus, and an electronic device.
Background
With the development of education and teaching, the electronic classroom is gradually replacing the traditional fixed-site classroom teaching mode, making modern teaching more efficient; in particular, the remote classroom alleviates the shortage of teachers and resources in remote areas.
In the related art, remote teaching in remote areas short of teachers and materials is usually carried out with classroom video data recorded in advance. However, this mode of remote teaching cannot map the interactive links in the teaching video to the real students in the current environment, so the classroom interaction experience of remote teaching is poor and the efficiency of remote teaching is reduced.
Disclosure of Invention
In view of the above, embodiments of the present application are intended to provide a processing method, a processing apparatus, and an electronic device.
The technical solutions of the application are realized as follows:
in a first aspect, an embodiment of the present application provides a processing method, including:
playing first video content for teaching, wherein the first video content comprises a plurality of objects in a recording environment;
obtaining target object information in a current playing environment at least based on interaction information of a first object and a second object in the first video content;
replacing a first object and/or a second object in the first video content with the target object based on the target object information to play video content including the target object;
wherein the first object and the second object are of the same or different types.
Optionally, the method further comprises: obtaining first video content; wherein the obtaining of the first video content comprises:
obtaining video contents under a plurality of recording environments;
obtaining analysis data under each recording environment based on the video content, wherein the analysis data at least comprises the correlation of the recording objects;
determining target video content from the video content based on the environment to be played;
and replacing, based on the correlation of the recording objects, content data in the other video contents that matches the environment to be played into the target video content, so as to obtain the first video content.
Optionally, before the at least based on the interaction information of the first object and the second object in the first video content, the method further includes:
acquiring interaction information of a first object and a second object in the first video content; wherein the interaction information is acquired in at least one of the following ways: from pre-loading information of the first video content, from summary information of the first video content, or from marking information generated during the video content recording process.
Optionally, the obtaining target object information in the current playing environment includes:
determining a first interaction behavior parameter between a first object and a second object which accord with an interaction condition in a recording environment;
in the process of playing the first video content, acquiring a second interaction behavior parameter between a third object and a fourth object in a playing environment at least based on the playing parameter of the first video content;
determining the third object and the fourth object as target objects in the current playing environment when detecting that the matching degree of the second interactive behavior parameter and the first interactive behavior parameter accords with a first threshold value;
wherein the first object and the second object are of the same type, and the interaction condition is at least related to interaction information between the first object and the second object.
Optionally, the obtaining target object information in the current playing environment includes:
in the process of playing the first video content, acquiring first related information of a fifth object in a playing environment at least based on the first video content;
determining the fifth object as a target object in the current playing environment under the condition that the association degree of the first related information and the second related information accords with a second threshold value;
wherein the second related information is related to the second object, of the first object and the second object, that meets the questioning condition in the recording environment.
Optionally, when it is detected that the association degree between the fifth object and the second object in at least one of the identification information, the class situation information and the learning situation information meets a second threshold, the fifth object is determined to be the target object, the fifth object and the second object being of the same type; or,
when it is detected that the association degree between the fifth object and the second object meets a second threshold, the fifth object is determined to be the target object, the fifth object and the second object being of different types.
Optionally, the obtaining target object information in the current playing environment includes:
in the process of playing the first video content, acquiring historical behavior information of a sixth object in a playing environment at least based on the first video content;
determining that the sixth object is a target object in the current playing environment under the condition that the similarity between the historical behavior information and the current behavior information of the first object meets a third threshold or a fourth threshold;
and the current behavior information represents the behavior of the first object which meets the answering condition in the recording environment.
Optionally, the method further comprises:
acquiring behavior difference parameters of the target object and the replaced object;
and updating the first video content to play the updated video content comprising the target object under the condition that the behavior difference parameter meets a fifth threshold value.
In a second aspect, an embodiment of the present application provides an electronic device, including a memory and a processor; wherein:
the memory for storing a computer program operable on the processor;
the processor is configured to execute the steps of the processing method as described above when the computer program is executed.
In a third aspect, an embodiment of the present application provides a processing apparatus, including:
the system comprises a playing unit, a recording unit and a playing unit, wherein the playing unit is used for playing first video content used for teaching, and the first video content comprises a plurality of objects in a recording environment;
the acquisition unit is used for acquiring target object information in the current playing environment at least based on the interaction information of the first object and the second object in the first video content;
a replacing unit configured to replace a first object and/or a second object in the first video content with the target object based on the target object information to play video content including the target object; wherein the first object and the second object are of the same or different types.
In a fourth aspect, the present application provides a storage medium storing a computer program, where the computer program implements the steps of the processing method described above when executed by a processor.
According to the processing method, the processing apparatus and the electronic device, first video content for teaching is played, wherein the first video content includes a plurality of objects in a recording environment; target object information in the current playing environment is obtained based at least on interaction information of a first object and a second object in the first video content; and the first object and/or the second object in the first video content is replaced with the target object based on the target object information, so as to play video content including the target object; wherein the first object and the second object are of the same or different types. In this way, during the playing of the first video content, the target object information in the current playing environment can be obtained from the interaction information of the first object and the second object, and the first object and/or the second object can then be replaced with the target object. This not only realizes remote teaching in remote areas, but also matches the interaction in the first video content to the target object in the current playing environment; that is, classroom interactivity of remote teaching is improved, remote teaching comes closer to real classroom teaching, and the efficiency of remote teaching is improved.
Drawings
Fig. 1 is a schematic structural diagram of a remote teaching system in the related art;
fig. 2 is a first schematic flow chart of a processing method according to an embodiment of the present disclosure;
FIG. 3 is a block flow diagram of a remote instruction system provided in an embodiment of the present application;
fig. 4 is a second flowchart illustrating a processing method according to an embodiment of the present application;
fig. 5 is a third schematic flowchart of a processing method according to an embodiment of the present application;
fig. 6 is a fourth schematic flowchart of a processing method according to an embodiment of the present application;
fig. 7 is a fifth flowchart illustrating a processing method according to an embodiment of the present disclosure;
fig. 8 is a sixth schematic flowchart of a processing method according to an embodiment of the present application;
fig. 9 is a first schematic structural diagram illustrating a processing apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a processing apparatus according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be appreciated that reference throughout this specification to "an embodiment of the present application" or "an embodiment described previously" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in the embodiments of the present application" or "in the embodiments" in various places throughout this specification are not necessarily all referring to the same embodiments. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In order to better understand the processing method provided by the embodiment of the present application, the following analysis will be made for the remote teaching in the related art.
Distance teaching, also known as electronic teaching (e-teaching), is a formal teaching and learning system specifically designed to carry out teaching over a distance by electronic communication means. It adopts multimedia for systematic teaching and communication, and can deliver courses to one or more students outside a specific campus. Modern distance teaching refers to teaching modes in which courses are delivered through audio, video (live or recorded) and computer technologies, both real-time and non-real-time. It is a new teaching mode that emerged with the development of modern information technology; advances in computer, multimedia and communication technologies have given distance teaching a qualitative leap, and it is regarded as an important educational means for the future.
Remote teaching also needs the support of a complete teaching system. Such a system mainly comprises a recording classroom, remote classrooms and a computer network (or server). The recording classroom is the teaching environment of the teacher and is used for recording or collecting teaching information; the remote classroom is the learning environment of the students and is used for receiving teaching information and feeding back student information. Both the recording classroom and the remote classrooms are controlled by the computer network.
Referring to fig. 1, a schematic structural diagram of a remote teaching system provided in the related art is shown. As shown in fig. 1, the remote teaching system 10 includes a recording classroom 110, remote classrooms 120, 130, and a server 140. Wherein the recording classroom 110 and remote classrooms 120, 130 communicate with each other through a server 140. Specifically, the recording classroom 110 is used to record instructional information to form video content and then send the video content to the server 140; the lessons may be delivered to one or more students outside of a particular campus (e.g., recording classroom 110) via server 140 for playing in a plurality of remote classrooms (e.g., 120 and 130) such that students in the remote classrooms may receive instructional information by viewing video content.
In practical application scenarios, remote teaching takes various forms; correspondence teaching, television teaching and broadcast teaching all belong to the remote teaching category. These teaching modes have many advantages: they make full use of educational resources so that more people can receive education, they free education from geographic constraints, and they alleviate the shortage of teachers and resources in remote areas. However, they also have great limitations: the online classroom is one-size-fits-all and, in particular, lacks teaching interaction, since the interactive links in the teaching video cannot be mapped to the real students in the current environment. As a result, the classroom interaction experience of remote teaching is poor, remote teaching is restricted in some occasions, and its efficiency is reduced.
Based on this, an embodiment of the present application provides a processing method. During the playing of first video content, target object information in the current playing environment can be obtained according to interaction information of a first object and a second object in the first video content, and the first object and/or the second object in the first video content can then be replaced using the target object information. This not only realizes remote teaching in remote areas, but also matches the interaction in the first video content for teaching to the target object in the current playing environment; that is, classroom interactivity of remote teaching is improved, remote teaching comes closer to real teaching, and the efficiency of remote teaching is further improved.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 2, a schematic flow chart of a processing method provided in the embodiment of the present application is shown. As shown in fig. 2, the processing method may include the steps of:
step 201, playing a first video content for teaching.
In the embodiment of the application, the processing method is applied to a processing device or an electronic device integrated with the processing device. The electronic device may be any device with data processing capability, such as a server, a smart phone, a tablet computer, a notebook computer, a palm top computer, a personal digital assistant, a portable media player, a wearable device, a digital television, a desktop computer, or the like.
In step 201, the first video content includes a plurality of objects in a recording environment. The plurality of objects may be people, such as teachers or students; they may also be operated objects, such as questions, knowledge points, test questions or answers to test questions, and the embodiments of the present application impose no limitation here.
In some embodiments, before step 201, the processing method may further include: first video content is obtained.
Wherein, the first video content can be obtained by video content recorded in at least one recording environment; here, a camera is configured in each recording environment, and video content in each recording environment can be obtained through signal acquisition of the camera.
In one embodiment, the first video content may be video content recorded in a recording environment.
Here, after obtaining the video content in a recording environment, the target video content may be determined from the video content according to the environment to be played, and then the target video content may be used as the first video content. Or, the analysis data in the recording environment can be obtained by performing data analysis and processing on the video content, and then the video content is partially replaced according to the environment to be played and the obtained analysis data, so as to obtain the first video content.
In one embodiment, the first video content may be video content recorded in a plurality of recording environments.
Here, after obtaining video contents in a plurality of recording environments, analysis data in each recording environment may be obtained by performing data analysis and processing on the video contents, the analysis data including at least the correlation of the recording object; then, determining target video content from the video content based on the environment to be played; and replacing content data matched with the environment to be played in other video contents to the target video content based on the correlation of the recording object, so as to obtain the first video content.
It should be noted that, for the correlation of the recording object, the knowledge point may be used as the analysis data, and all the related content data including a certain knowledge point are integrated to obtain the first video content; or, a teacher may be used as the analysis data, and all the related content data of a certain teacher may be integrated to obtain the first video content.
That is, the first video content may be video content recorded by at least one recording environment (such as a recording classroom as shown in fig. 1). In this way, in the remote teaching, after the first video content is obtained, the first video content can be played in a playing environment (such as a remote classroom shown in fig. 1), so that a teacher or a student in the playing environment can receive teaching information by watching the video content.
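To make the content-assembly step above concrete, the following minimal Python sketch illustrates one possible way to select target video content for an environment to be played and merge in matching content data from other recordings by recording-object correlation (here, shared knowledge points). All data structures, names and the correlation measure are assumptions for illustration; the embodiment does not prescribe a concrete implementation.

```python
# Illustrative sketch only: assembling the first video content from several
# recordings. The Recording layout and the correlation measure are assumed.
from dataclasses import dataclass, field

@dataclass
class Recording:
    environment: str            # e.g. "classroom_A"
    knowledge_points: set[str]  # analysis data: correlation by knowledge point
    teacher: str
    segments: list[str] = field(default_factory=list)

def build_first_video(recordings: list[Recording], env_to_play: dict) -> Recording:
    # Choose the recording whose knowledge points best cover the
    # environment to be played (the "target video content").
    wanted = set(env_to_play["knowledge_points"])
    target = max(recordings, key=lambda r: len(r.knowledge_points & wanted))

    # Replace/augment with matching content data from the other recordings,
    # using the recording-object correlation (shared knowledge points here).
    for other in recordings:
        if other is target:
            continue
        if other.knowledge_points & wanted:
            target.segments += [s for s in other.segments
                                if any(k in s for k in wanted)]
    return target

if __name__ == "__main__":
    recs = [
        Recording("classroom_A", {"fractions", "decimals"}, "teacher_1",
                  ["fractions_intro", "decimals_intro"]),
        Recording("classroom_C", {"fractions"}, "teacher_2",
                  ["fractions_exercise"]),
    ]
    first = build_first_video(recs, {"knowledge_points": ["fractions"]})
    print(first.segments)  # target segments plus matching ones merged in
```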
Step 202, obtaining target object information in the current playing environment at least based on the interaction information of the first object and the second object in the first video content.
In step 202, the first object and the second object may be of the same type or different types. When the first object and the second object are of the same type, the first object and the second object can be both human beings, namely a teacher and a student, or a student and a student, and the like. Alternatively, when the first object and the second object are of different types, the first object and the second object may refer to a person and an operated object, that is, a student and a test question, or a student and a knowledge point, and the like.
In addition, the target object may be one object or two objects. When the first object and the second object are of the same type, such as a teacher and a student, or a student and a student, the target object may be two objects. Alternatively, when the first object and the second object are of different types, such as students, test questions, and the like, the target object may be one object at this time.
In some embodiments, before step 202, the processing method may further include: the interactive information of the first object and the second object in the first video content is obtained.
The acquisition mode of the interactive information at least comprises one of the following items: obtaining pre-loading information based on the first video content, obtaining summary information based on the first video content, or obtaining marking information based on a video content recording process.
In one embodiment, interaction information of the first object and the second object may be obtained based on pre-loaded information of the first video content. Specifically, when the first video content is played, the information such as the degree of interaction between the first object and the second object, the interaction content, the interaction plot, or the interaction theme can be obtained according to the preloading information, so as to obtain the interaction information of the first object and the second object.
In the embodiment of the application, timestamp information of the first video content can also be acquired; therefore, when the first video content is played, the information such as the interaction degree, the interaction content, the interaction plot or the interaction theme between the first object and the second object can be obtained according to the preloading information and the timestamp information, and the interaction information of the first object and the second object is obtained.
In one embodiment, the interaction information of the first object and the second object may be obtained based on summary information of the first video content. Specifically, when the first video content is played, the information such as the degree of interaction, the interactive content, the interactive plot, or the interactive theme between the first object and the second object can be obtained according to the summary information of the first video content, so as to obtain the interactive information between the first object and the second object.
In one embodiment, the interaction information of the first object and the second object may be obtained based on the mark information of the video content recording process. Specifically, in the recording process of the video content, information such as the interaction degree, the interaction content, the interaction plot or the interaction theme between the first object and the second object is marked; therefore, after the first video content is obtained, when the first video content is played, the information such as the interaction degree, the interaction content, the interaction plot or the interaction theme between the first object and the second object can be obtained according to the mark information, so that the interaction information between the first object and the second object can be obtained.
That is, when playing the first video content, if there is an interaction between the first object and the second object in the next frame, before playing the next frame, it is first necessary to obtain the interaction information between the first object and the second object, so as to subsequently replace the first object and/or the second object with the target object in the current playing environment, so as to continue playing the video content of the next frame. Therefore, the video content of the next frame played already contains the target object in the current playing environment, so that the classroom interactivity of the remote teaching is improved, and the remote teaching can be closer to the real teaching.
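As a hedged illustration of the three acquisition modes described above (pre-loading information, summary information, and recording-time marks), the sketch below looks up the interaction information needed before an upcoming frame is played. The record layout is an assumption made up for illustration, not part of the embodiment.

```python
# Illustrative sketch: fetch interaction information for the next frame from
# whichever source is available. Field names here are assumptions.
def get_interaction_info(video: dict, next_frame: int) -> dict | None:
    """Return interaction info needed before playing `next_frame`, if any."""
    # Mode 1: pre-loading information shipped with the first video content.
    for item in video.get("preload", []):
        if item["frame"] == next_frame:
            return item
    # Mode 2: summary information of the first video content.
    for item in video.get("summary", []):
        if item["frame"] == next_frame:
            return item
    # Mode 3: marks added during the video content recording process.
    for item in video.get("marks", []):
        if item["frame"] == next_frame:
            return item
    return None  # no interaction in the upcoming frame

video = {"marks": [{"frame": 120, "objects": ("teacher", "student_A"),
                    "topic": "question", "degree": 0.8}]}
print(get_interaction_info(video, 120))
```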
In step 202, after the interaction information of the first object and the second object is obtained, the target object information in the current playing environment may be determined according to the interaction information of the first object and the second object.
In one embodiment, the interaction of the first object with the second object may refer to a first interaction scenario between a student and a student, or between a teacher and a student. For example, when a discussion is performed between students or a teacher is discussed with students, the interaction information of the first object and the second object may be the interaction information of the students or the interaction information of the teacher and the students; correspondingly, the target object at this time may be a third object and a fourth object in the playback environment.
Specifically, first interaction behavior parameters between a first object and a second object which accord with interaction conditions in a recording environment are determined; in the process of playing the first video content, acquiring a second interaction behavior parameter between a third object and a fourth object in a playing environment at least based on the playing parameter of the first video content; and determining the third object and the fourth object as target objects in the current playing environment when the matching degree of the second interactive behavior parameter and the first interactive behavior parameter is detected to accord with a first threshold value.
The first object and the second object are of the same type, the interaction condition may include an interaction degree, an interaction content, an interaction plot or an interaction theme, and the interaction condition is at least related to interaction information between the first object and the second object.
It should be noted that, for "determining a first interaction behavior parameter between a first object and a second object meeting an interaction condition in a recording environment", the determination may be based on analyzing the first video content or during a recording process of the video content.
Optionally, may be determined based on analyzing the first video content; specifically, the analysis and the processing are performed according to the pre-loading information or the summary information of the first video content, so that a first interaction behavior parameter between the first object and the second object which meet the interaction condition in the recording environment can be determined.
Alternatively, it may be determined during the recording of the video content; specifically, in the recording process of the video content, information such as the interaction degree, the interaction content, the interaction plot or the interaction theme between the first object and the second object is marked, so that a first interaction behavior parameter between the first object and the second object meeting the interaction condition in the recording environment can be determined according to the marking information.
It should be further noted that, for "acquiring a second interaction behavior parameter between a third object and a fourth object in the playing environment based on the playing parameter of the first video content", the playing parameter may include a playing progress, a playing sound effect, and the like.
Thus, based on the first interactive scene, after the second interactive behavior parameter between the third object and the fourth object in the playing environment is acquired, if the matching degree of the second interactive behavior parameter and the first interactive behavior parameter meets the first threshold, which indicates that the third object and the fourth object in the playing environment and the first object and the second object in the recording environment have the best matching degree, the third object and the fourth object may be determined as the target object in the current playing environment.
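The matching step of this first interaction scenario can be pictured with the following sketch, which compares the second interaction behavior parameters observed in the playing environment against the first interaction behavior parameters from the recording environment and accepts a pair of objects once the matching degree meets the first threshold. The parameter vector, similarity measure and threshold value are illustrative assumptions.

```python
# Illustrative sketch: match a pair in the playing environment to the pair
# that met the interaction condition in the recording environment.
FIRST_THRESHOLD = 0.75  # assumed value

def match_degree(p1: dict, p2: dict) -> float:
    keys = p1.keys() & p2.keys()
    if not keys:
        return 0.0
    # 1 - mean absolute difference over shared normalized parameters
    return 1.0 - sum(abs(p1[k] - p2[k]) for k in keys) / len(keys)

def find_target_pair(first_params: dict, playing_pairs: dict) -> tuple | None:
    for pair, second_params in playing_pairs.items():
        if match_degree(second_params, first_params) >= FIRST_THRESHOLD:
            return pair  # the third and fourth objects become target objects
    return None

recorded = {"degree": 0.9, "duration": 0.6}        # first interaction params
observed = {("stu_B", "stu_C"): {"degree": 0.85, "duration": 0.7}}
print(find_target_pair(recorded, observed))        # ('stu_B', 'stu_C')
```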
In one embodiment, the interaction of the first object with the second object may also refer to a second interaction scenario between the student and the question (or knowledge point, etc.). For example, when a teacher asks a student, the interaction information of the first object and the second object may be the interaction information of the student and the question; correspondingly, the target object may be a fifth object in the playback environment.
Specifically, in the process of playing the first video content, first related information of a fifth object in the playing environment is acquired at least based on the first video content; and determining the fifth object as the target object in the current playing environment under the condition that the association degree of the first related information and the second related information meets a second threshold value.
Wherein the second related information is related to the second object, of the first object and the second object, that meets the questioning condition in the recording environment.
It should be noted that, when the fifth object and the second object are of the same type, if it is detected that the association degree of the fifth object and the second object in at least one of the identification information, the class situation information, and the learning situation information meets the second threshold, it may be determined that the fifth object is the target object. Alternatively, when the fifth object and the second object are of different types, if it is detected that the association degree of the fifth object and the second object meets the second threshold, it may also be determined that the fifth object is the target object.
It should be further noted that the identification information represents basic parameter information of the fifth object or the second object, and may include seat position, student number, name similarity and the like. The learning situation information represents the learning performance of the fifth object or the second object, and may include grades, completion of historical assignments, accuracy, and degree of mastery of knowledge points. The class situation information represents the in-class behavior of the fifth object or the second object, and may include expressions, behaviors, attention and the like.
Thus, based on the second interactive scene, after the first related information of the fifth object in the playing environment is acquired, if the association degree of the first related information in the playing environment and the second related information in the recording environment meets the second threshold, which indicates that the fifth object in the playing environment and the second object in the recording environment have the best matching degree, the fifth object may be determined as the target object in the current playing environment.
For example, when a teacher asks a student a question, on the one hand, second related information of the second object meeting the questioning condition in the recording environment may be acquired, where the second object represents student A. After the class situation information of student A in the recording environment (including inattention, expressions, behaviors and the like) is obtained, the class situation information of a fifth object (such as student B) in the playing environment is considered; if the association degree between student B and student A in the class situation information meets the second threshold, that is, the fifth object and the second object match best, the fifth object may be determined to be the target object. In short, when the second object represents student A, at least one of the identification information, the class situation information and the learning situation information may be considered: if the association degree between the fifth object in the playing environment (such as student B) and the second object in the recording environment (such as student A) meets the second threshold, indicating the best matching degree between them, the fifth object may be determined to be the target object. On the other hand, second related information of a second object meeting the questioning condition in the recording environment may be acquired where the second object represents a question (or a knowledge point). In this case, the association between the second object and a fifth object (such as student B) in the playing environment may consider error rate, accuracy, degree of mastery and the like; if the association degree between student B and the question (or knowledge point) meets the second threshold, indicating the best matching degree between the fifth object and the second object, the fifth object may be determined to be the target object.
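One way to picture the association-degree check of this second interaction scenario is the sketch below, which scores a candidate fifth object against the second object over identification, class situation and learning situation information. The fields, scoring rule and threshold value are assumptions for illustration only.

```python
# Illustrative sketch: association degree between a fifth object in the
# playing environment and the second object that met the questioning
# condition. Information kinds and the Jaccard-style score are assumed.
SECOND_THRESHOLD = 0.6  # assumed value

def association_degree(fifth: dict, second: dict) -> float:
    scores = []
    for kind in ("identification", "class_situation", "learning_situation"):
        a, b = fifth.get(kind), second.get(kind)
        if a is None or b is None:
            continue
        scores.append(len(a & b) / max(len(a | b), 1))
    return sum(scores) / len(scores) if scores else 0.0

student_A = {"class_situation": {"inattentive", "looking_away"},
             "learning_situation": {"weak_fractions"}}
student_B = {"class_situation": {"inattentive"},
             "learning_situation": {"weak_fractions"}}
if association_degree(student_B, student_A) >= SECOND_THRESHOLD:
    print("student_B is the target object")
```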
In one embodiment, the interaction of the first object with the second object may refer to a third interaction scenario between the student and the test question. For example, when a student answers a question, the interaction information of the first object and the second object may be interaction information of the student and a test question; correspondingly, the target object may be a sixth object in the playback environment.
Specifically, in the process of playing the first video content, historical behavior information of a sixth object in the playing environment is acquired at least based on the first video content; and under the condition that the similarity between the historical behavior information and the current behavior information of the first object meets a third threshold or a fourth threshold, determining that the sixth object is the target object in the current playing environment.
The current behavior information represents the behavior of the first object which meets the answering condition in the recording environment.
It should be noted that the historical behavior information may refer to a historical answer condition of a sixth object in the playing environment; in this way, if the similarity between the historical behavior information and the current behavior information of the first object meets the third threshold or the fourth threshold, which indicates that the answer information of the sixth object has the best matching degree with the answer information of the first object in the recording environment, it may be determined that the sixth object is the target object in the current playing environment.
In this way, based on the third interactive scenario, after the historical behavior information of the sixth object in the playing environment is acquired, if the similarity between the historical behavior information of the sixth object in the playing environment and the current behavior information of the first object in the recording environment meets the third threshold or the fourth threshold, which indicates that the sixth object in the playing environment and the first object in the recording environment have the best matching degree, the sixth object may be determined as the target object in the current playing environment.
For example, assume that a test question has four possible answers: A, B, C and D. Based on the answers of the students in the recording environment, answers B and C were originally to be explained in the first video content; however, based on the answers of the students in the playing environment, answers A and D may need to be explained instead. In this case, a student whose answer is A or D can be selected to answer; that is, when the similarity between the historical behavior information of the sixth object in the playing environment and the current behavior information of the first object in the recording environment meets the third threshold or the fourth threshold, the sixth object may be determined to be the target object in the current playing environment.
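The similarity check of this third interaction scenario might look like the following sketch, which selects the sixth object whose historical answering behavior is most similar to the recorded first object's answering behavior. The answer representation and threshold are illustrative assumptions.

```python
# Illustrative sketch: pick the sixth object (answerer) by similarity of
# historical answers to the recorded first object's answering behavior.
THIRD_THRESHOLD = 0.5  # assumed value

def similarity(history: list[str], current: list[str]) -> float:
    h, c = set(history), set(current)
    return len(h & c) / max(len(h | c), 1)

def pick_answerer(playing_students: dict, recorded_answers: list[str]):
    name, hist = max(playing_students.items(),
                     key=lambda kv: similarity(kv[1], recorded_answers))
    return name if similarity(hist, recorded_answers) >= THIRD_THRESHOLD else None

students = {"stu_X": ["A", "D"], "stu_Y": ["B"]}
print(pick_answerer(students, ["A", "D"]))  # stu_X matches the recorded answers
```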
In this way, whether based on the first interactive scene, the second interactive scene or the third interactive scene, after the target object information in the current playing environment is obtained based on at least the interactive information of the first object and the second object in the first video content, the target object can be used to replace the first object and/or the second object in the first video content, so as to play the video content containing the target object.
Step 203, replacing the first object and/or the second object in the first video content with the target object based on the target object information, so as to play the video content including the target object.
In step 203, the types of the first object and the second object may be the same or different. When the first object and the second object are of the same type, the first object and the second object can be both human beings, namely a teacher and a student, or a student and a student, and the like. Alternatively, when the first object and the second object are of different types, the first object and the second object may refer to a person and an operated object, that is, a student and a test question, or a student and a knowledge point, and the like.
In addition, the target object may be one object or two objects. When the first object and the second object are of the same type, such as a teacher and a student, or a student and a student, the target object may be two objects. Alternatively, when the first object and the second object are of different types, such as students, test questions, and the like, the target object may be one object at this time.
In some embodiments, the second video content may be generated and played by replacing the first object and/or the second object in the first video content with the target object based on the target object information.
Wherein the second video content may be video content including a target object.
In one embodiment, based on the first interactive scene, the target objects are a third object and a fourth object in the playing environment; at this time, the first object and the second object in the first video content may be replaced with a third object and a fourth object in the playing environment to generate the second video content.
In one embodiment, based on the second interactive scene, the target object is a fifth object in the playing environment; at this time, when the fifth object is of the same type as the second object, the second object in the first video content may be replaced with the fifth object in the playback environment to generate the second video content.
In one embodiment, based on the second interactive scene, the target object is a fifth object in the playing environment; at this time, when the fifth object is different from the second object, the first object in the first video content may be replaced with the fifth object in the play environment to generate the second video content.
In one embodiment, based on the third interactive scene, the target object is a sixth object in the playing environment; at this time, the first object in the first video content may be replaced with a sixth object in the play environment to generate the second video content.
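To summarize the replacement step across these embodiments, the sketch below substitutes target objects for the replaced objects frame by frame to produce the second video content. A real system would composite video rather than edit object labels, so this is a minimal illustration under assumed data structures.

```python
# Illustrative sketch of step 203: substitute target object(s) for the
# replaced object(s) in each frame to generate the second video content.
def replace_objects(first_video: list[dict], mapping: dict) -> list[dict]:
    """Return second video content with objects substituted per `mapping`."""
    second_video = []
    for frame in first_video:
        new_frame = dict(frame)
        new_frame["objects"] = [mapping.get(obj, obj)
                                for obj in frame["objects"]]
        second_video.append(new_frame)
    return second_video

first_video = [{"t": 0, "objects": ["student_1", "student_2"]},
               {"t": 1, "objects": ["teacher", "question_7"]}]
# First interactive scene: both first and second objects are replaced by the
# third and fourth objects from the playing environment.
second = replace_objects(first_video, {"student_1": "stu_B",
                                       "student_2": "stu_C"})
print(second)
```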
In some embodiments, after step 203, the processing method may further include: acquiring behavior difference parameters of the target object and the replaced object; and in the case that the behavior difference parameter meets a fifth threshold value, updating the first video content to play the updated video content including the target object.
It should be noted that after the target object and the replaced object are determined, the behavior difference parameter of the target object and the replaced object may be obtained. In this way, in the case that the behavior difference parameter meets the fifth threshold, the first video content is updated to generate the third video content, and the third video content is played. Wherein the third video content may be updated video content including the target object.
In one embodiment, based on the first interactive scene, the target objects are a third object and a fourth object in the playing environment; at this time, the behavior difference parameter between the first object and/or the second object in the first video content and the third object and the fourth object in the playing environment may be obtained, and in a case that the behavior difference parameter meets a fifth threshold, the first video content may be updated to generate the third video content.
In one embodiment, based on the second interactive scene, the target object is a fifth object in the playing environment; at this time, when the fifth object and the second object are of the same type, the behavior difference parameter between the second object in the first video content and the fifth object in the playing environment may be obtained, and the first video content is updated to generate the third video content when the behavior difference parameter meets the fifth threshold.
In one embodiment, based on the second interactive scene, the target object is a fifth object in the playing environment; at this time, when the fifth object is different from the second object, a behavior difference parameter between the first object in the first video content and the fifth object in the playing environment may be obtained, and the first video content may be updated to generate the third video content when the behavior difference parameter meets a fifth threshold.
In one embodiment, based on the third interactive scene, the target object is a sixth object in the playing environment; at this time, the behavior difference parameter between the first object in the first video content and the sixth object in the playing environment may be obtained, and the first video content may be updated to generate the third video content when the behavior difference parameter meets the fifth threshold.
For example, assume again that a test question has four possible answers: A, B, C and D. Based on the answers of the students in the recording environment, answers B and C were originally to be explained in the first video content; however, based on the answers of the students in the playing environment, answers A and D may need to be explained, and a student whose answer is A or D can be selected to answer. That is, when the similarity between the historical behavior information of the sixth object in the playing environment and the current behavior information of the first object in the recording environment meets the third threshold or the fourth threshold, the sixth object may be determined to be the target object in the current playing environment. In this case, since the answer information of the sixth object differs from the answer information of the first object in the first video content, after the behavior difference parameter (such as the answer information) of the sixth object and the first object is obtained, if the behavior difference parameter meets the fifth threshold, the first video content may be updated according to the answer information of the sixth object.
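The update path can be sketched as follows: when the behavior difference between the target object and the replaced object meets the fifth threshold, the first video content is updated to third video content matching the target object's behavior. The difference measure and the content selection are assumptions for illustration.

```python
# Illustrative sketch: update the first video content when the behavior
# difference parameter meets the fifth threshold (answer sets assumed).
FIFTH_THRESHOLD = 0.5  # assumed value

def behavior_difference(target: set[str], replaced: set[str]) -> float:
    # Share of answers that differ (symmetric difference over the union).
    return len(target ^ replaced) / max(len(target | replaced), 1)

def update_video(first_video: dict, target_answers: set[str],
                 replaced_answers: set[str]) -> dict:
    if behavior_difference(target_answers, replaced_answers) >= FIFTH_THRESHOLD:
        third_video = dict(first_video)
        # Re-select the explanation segments to match the target's answers.
        third_video["explained_answers"] = sorted(target_answers)
        return third_video
    return first_video

video = {"explained_answers": ["B", "C"]}
print(update_video(video, {"A", "D"}, {"B", "C"}))  # now explains A and D
```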
Referring to fig. 3, a block flow diagram of a remote teaching system provided by an embodiment of the present application is shown. As shown in fig. 3, classroom A is a recording classroom in which a teacher gives lessons while a camera records video, yielding the video content 310 in the recording environment. After the video content in the recording environment is acquired, data analysis and processing 320 is performed on it to obtain the first video content 330. In fig. 3, classroom B1, classroom B2, ..., classroom Bn are remote classrooms. Taking classroom B1 as an example, in order to improve classroom interactivity in remote teaching, related information 340 in the playing environment is obtained, such as the learning situation information, class situation information or identification information of multiple objects in classroom B1. Then, with the first video content and the related information in the playing environment available, target object information in the playing environment can be obtained based on the interaction information of the first object and the second object in the first video content, and the first object and/or the second object in the first video content is replaced with the target object 350 to obtain video content including the target object. Finally, in classroom B1, the video content 360 including the target object is played. In this way, the played video content already contains the target object in the current playing environment, so the classroom interactivity of remote teaching is improved and remote teaching comes closer to real teaching.
This embodiment provides a processing method: first video content for teaching is played, wherein the first video content includes a plurality of objects in a recording environment; target object information in the current playing environment is obtained based at least on interaction information of a first object and a second object in the first video content; and the first object and/or the second object in the first video content is replaced with the target object based on the target object information, so as to play video content including the target object; the first object and the second object are of the same or different types. In this way, during the playing of the first video content, the target object information in the current playing environment can be obtained according to the interaction information of the first object and the second object, and the first object and/or the second object can then be replaced. This not only realizes remote teaching in remote areas, but also matches the interaction in the first video content to the target object in the current playing environment; that is, classroom interactivity of remote teaching is improved, remote teaching comes closer to real teaching, and the efficiency of remote teaching is further improved.
Based on the foregoing embodiments, refer to fig. 4, which shows a schematic flow chart of another processing method provided in the embodiments of the present application. As shown in fig. 4, the processing method may include the steps of:
step 401, obtaining video contents in a plurality of recording environments;
in step 401, for a plurality of recording environments, a camera is configured in each recording environment, and video content in each recording environment can be obtained through signal acquisition of the camera.
In addition, each recording environment further includes a plurality of recording objects, which include at least a first object and a second object. The plurality of objects may be people, such as teachers or students; they may also be operated objects, such as questions, knowledge points, test questions or answers to test questions, and the embodiments of the present application impose no limitation here.
Step 402, obtaining analysis data in each recording environment based on video content;
in step 402, the analysis data comprises at least the relevance of the recorded objects.
Here, after obtaining video contents in a plurality of recording environments, analysis data in each recording environment can be obtained by analyzing and processing the video contents, for example by analyzing the learning situation information (including grades, completion of historical assignments, accuracy, degree of mastery of knowledge points, etc.) or the class situation information (including expressions, behaviors, attention, etc.) of the recording objects in the video content.
Step 403, determining target video content from the video content based on the environment to be played;
step 404, replacing content data matched with the environment to be played in other video contents to the target video content based on the correlation of the recorded object to obtain a first video content;
in the embodiment of the application, after the video content is obtained, the target video content can be determined from the video content according to the environment to be played, such as the information to be taught (including knowledge points to be explained, course content, test paper and the like) in a remote classroom; and then replacing content data matched with the environment to be played in other video contents to the target video content according to the correlation of the recording object, thereby obtaining the first video content.
Here, regarding the correlation of the recording object, the knowledge point may be used as the analysis data, for example, all related content data including a certain knowledge point are integrated to obtain the first video content; alternatively, the teacher may be used as the analysis data, for example, all the related content data of a certain teacher are integrated to obtain the first video content.
Step 405, playing the first video content;
In the embodiment of the application, after the first video content is obtained, it can be played in a remote classroom where remote teaching is needed. For example, for an area with strong teaching resources, the teaching of its teachers can be recorded on video to obtain the first video content; the first video content is then played through a playing terminal in a remote area or an area with weak teaching resources. In this way, educational resources can be fully utilized so that more people receive education, education is no longer bound by geographic factors, and the shortage of teachers and resources in remote areas is alleviated.
Step 406, obtaining target object information in a current playing environment at least based on the interaction information of the first object and the second object in the first video content;
step 407, replacing the first object and/or the second object in the first video content with the target object based on the target object information, so as to play the video content including the target object.
In the embodiment of the present application, the first object and the second object may be of the same type or different types. When the first object and the second object are of the same type, the first object and the second object can be both human beings, namely a teacher and a student, or a student and a student, and the like. Alternatively, when the first object and the second object are of different types, the first object and the second object may refer to a person and an operated object, that is, a student and a test question, or a student and a knowledge point, and the like.
In addition, the target object may be one object or two objects. When the first object and the second object are of the same type, such as a teacher and a student or a student and a student, the target object may be two objects. When the first object and the second object are of different types, such as a student and a test question, the target object may be one object.
It should be noted that the interaction information of the first object and the second object in the first video content may correspond to a first interactive scene, a second interactive scene, or a third interactive scene. The first interactive scene mainly represents person-to-person interaction, such as interaction between students or between a teacher and a student; the second interactive scene mainly represents interaction between a person and an operated object (such as a question or a knowledge point) when a teacher questions a student, for example the interaction between the student and the question (or the knowledge point); the third interactive scene mainly represents interaction between a person and an operated object (such as a test question) when a student answers a test question, for example the interaction between the student and the test question.
In this way, whether based on the first interactive scene, the second interactive scene, or the third interactive scene, after obtaining the target object information in the current playing environment based on at least the interactive information of the first object and the second object in the first video content, the target object can be used to replace the first object and/or the second object in the first video content, so as to play the video content containing the target object.
The three interactive scenarios are described in detail below with reference to practical applications.
For a first interaction scenario, refer to fig. 5, which shows a flowchart of another processing method provided in an embodiment of the present application. As shown in fig. 5, the processing method may include the steps of:
step 501, playing first video content for teaching;
step 502, determining a first interaction behavior parameter between a first object and a second object which meet an interaction condition in a recording environment;
step 503, in the process of playing the first video content, acquiring a second interaction behavior parameter between a third object and a fourth object in a playing environment at least based on the playing parameter of the first video content;
step 504, determining the third object and the fourth object as target objects in the current playing environment when detecting that the matching degree of the second interaction behavior parameter and the first interaction behavior parameter accords with a first threshold value;
In the embodiment of the application, the first object and the second object are of the same type; the interaction condition may include an interaction degree, interaction content, an interaction episode, or an interaction theme, and is at least related to the interaction information between the first object and the second object.
It should be noted that, for step 502, the first interaction behavior parameter may be determined either by analyzing the first video content or during the recording of the video content.
Optionally, it may be determined by analyzing the first video content; specifically, the preloading information or the summary information of the first video content is analyzed and processed, so that the first interaction behavior parameter between the first object and the second object meeting the interaction condition in the recording environment can be determined.
Alternatively, it may be determined during the recording of the video content; specifically, information such as the interaction degree, the interaction content, the interaction episode, or the interaction theme between the first object and the second object is marked during recording, so that the first interaction behavior parameter between the first object and the second object meeting the interaction condition in the recording environment can be determined from the marking information.
It should be further noted that, for "acquiring a second interaction behavior parameter between a third object and a fourth object in the playing environment based on the playing parameter of the first video content", the playing parameter may include a playing progress, a playing sound effect, and the like.
In this way, after the second interaction behavior parameter between the third object and the fourth object in the playing environment is obtained, if its matching degree with the first interaction behavior parameter meets the first threshold, which indicates that the third object and the fourth object in the playing environment best match the first object and the second object in the recording environment, the third object and the fourth object can be determined as the target objects in the current playing environment.
step 505, replacing the first object and the second object in the first video content with the third object and the fourth object in the playing environment to generate second video content;
step 506, playing the second video content.
In this embodiment of the application, the second video content is video content including the third object and the fourth object.
That is, for the first interactive scene, since the first object and the second object are of the same type, once the third object and the fourth object in the playing environment are determined, the first object and the second object in the first video content can be replaced with them to generate the second video content, which is then played. Because the played second video content already includes the third object and the fourth object in the current playing environment, the classroom interactivity of remote teaching is improved and remote teaching comes closer to real teaching.
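As a minimal sketch of steps 502 to 504, the interaction behavior parameters can be reduced to simple feature sets; the overlap-ratio matching degree and the value of the first threshold below are illustrative stand-ins, not the measure prescribed by the embodiment:

```python
def matching_degree(params_a, params_b):
    """Toy matching degree: overlap ratio of interaction features
    (interaction degree, content, episode, theme)."""
    union = set(params_a) | set(params_b)
    return len(set(params_a) & set(params_b)) / max(len(union), 1)

def find_target_pair(first_params, playback_pairs, first_threshold=0.8):
    """Steps 503-504: return the (third, fourth) pair whose second interaction
    behavior parameter best matches the recorded first interaction behavior
    parameter, provided the matching degree meets the first threshold."""
    best_pair, best_score = None, 0.0
    for third, fourth, second_params in playback_pairs:
        score = matching_degree(first_params, second_params)
        if score >= first_threshold and score > best_score:
            best_pair, best_score = (third, fourth), score
    return best_pair  # target objects, or None if no pair meets the threshold
```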
For a second interaction scenario, refer to fig. 6, which shows a flowchart of another processing method provided in the embodiment of the present application. As shown in fig. 6, the processing method may include the steps of:
step 601, playing first video content for teaching;
step 602, in the process of playing the first video content, acquiring first related information of a fifth object in a playing environment at least based on the first video content;
step 603, determining the fifth object as a target object in the current playing environment under the condition that the association degree of the first related information and the second related information meets a second threshold value;
wherein the second related information is related to the first object and the second object that meet the questioning condition in the recording environment.
It should be noted that, when the fifth object and the second object are of the same type, the fifth object may be determined to be the target object if it is detected that the degree of association between the fifth object and the second object in at least one of the identification information, the class situation information, and the learning situation information meets the second threshold. Alternatively, when the fifth object and the second object are of different types, the fifth object may also be determined to be the target object if it is detected that the degree of association between the fifth object and the second object meets the second threshold.
It should be further noted that the identification information represents basic parameter information of the fifth object or the second object, and may include a seat position, a student number, name similarity, and the like. The learning situation information represents learning performance information of the fifth object or the second object, and may include grade records, completion of historical assignments, accuracy, mastery of knowledge points, and the like. The class situation information represents behavioral performance information of the fifth object or the second object, and may include expressions, behaviors, attentiveness, and the like.
In this way, after the first related information of the fifth object in the playing environment is acquired, if the degree of association between the first related information in the playing environment and the second related information in the recording environment meets the second threshold, which indicates that the fifth object in the playing environment best matches the second object in the recording environment, the fifth object can be determined as the target object in the current playing environment.
step 604, replacing the first object or the second object in the first video content with the fifth object in the playing environment to generate second video content;
step 605, playing the second video content.
In this embodiment of the application, the second video content is video content including the fifth object.
That is, for the second interactive scene, the target object is the fifth object in the playing environment. When the fifth object is of the same type as the second object, the second object in the first video content may be replaced with the fifth object to generate the second video content; when the fifth object and the second object are of different types, the first object in the first video content may be replaced with the fifth object instead. Since the played second video content already contains the fifth object in the current playing environment, the classroom interactivity of remote teaching is improved and remote teaching comes closer to real teaching.
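A sketch of steps 602 and 603 follows, under the assumption that the association degree is an unweighted average over the identification, class situation, and learning situation information; the dictionary layout and the threshold value are hypothetical:

```python
def feature_similarity(info_a, info_b):
    """Toy similarity between two information dicts: the fraction of shared
    keys whose values agree."""
    keys = set(info_a) & set(info_b)
    return sum(info_a[k] == info_b[k] for k in keys) / max(len(keys), 1)

def find_questioned_target(second_info, playback_objects, second_threshold=0.7):
    """Steps 602-603: the fifth object whose first related information is most
    associated with the recorded second related information becomes the target."""
    best_obj, best_degree = None, 0.0
    for obj, first_info in playback_objects.items():
        degree = (feature_similarity(first_info["identification"], second_info["identification"])
                  + feature_similarity(first_info["class_situation"], second_info["class_situation"])
                  + feature_similarity(first_info["learning_situation"], second_info["learning_situation"])) / 3
        if degree >= second_threshold and degree > best_degree:
            best_obj, best_degree = obj, degree
    return best_obj  # target object, or None if nothing meets the second threshold
```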
For a third interaction scenario, refer to fig. 7, which shows a flowchart of another processing method provided in the embodiment of the present application. As shown in fig. 7, the processing method may include the steps of:
step 701, playing first video content for teaching;
step 702, in the process of playing the first video content, obtaining historical behavior information of a sixth object in a playing environment at least based on the first video content;
step 703, determining the sixth object as the target object in the current playing environment when the similarity between the historical behavior information and the current behavior information of the first object meets a third threshold or a fourth threshold;
the current behavior information represents the behavior of the first object which meets the answering condition in the recording environment.
It should be noted that the historical behavior information may refer to the historical answer record of the sixth object in the playing environment. In this way, after the historical behavior information of the sixth object is acquired, if its similarity with the current behavior information of the first object in the recording environment meets the third threshold or the fourth threshold, which indicates that the answering behavior of the sixth object best matches that of the first object in the recording environment, the sixth object can be determined as the target object in the current playing environment.
step 704, replacing the first object in the first video content with the sixth object in the playing environment to generate second video content;
step 705, playing the second video content.
In this embodiment of the application, the second video content is video content including the sixth object.
Thus, for the third interactive scene, the target object is the sixth object in the playing environment. Once the sixth object is determined, the first object in the first video content can be replaced with it to generate the second video content, which is then played. Since the played second video content already contains the sixth object in the current playing environment, the classroom interactivity of remote teaching is improved and remote teaching comes closer to real teaching.
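A sketch of steps 702 and 703 might look as follows, assuming answer records are stored as question-to-answer mappings. Reading the fourth threshold as selecting deliberately dissimilar answer histories follows the A/D example given below and is an interpretation, not something the embodiment states explicitly:

```python
def answer_similarity(history, current):
    """Fraction of commonly answered questions on which two answer records agree."""
    common = set(history) & set(current)
    return sum(history[q] == current[q] for q in common) / max(len(common), 1)

def find_answering_target(current_behavior, playback_histories,
                          third_threshold=0.9, fourth_threshold=0.2):
    """Steps 702-703: a sixth object becomes the target once the similarity of
    its historical answers to the first object's current answering behavior
    meets the third threshold (very similar) or the fourth threshold
    (very different, e.g. answers the class still needs explained)."""
    for obj, history in playback_histories.items():
        sim = answer_similarity(history, current_behavior)
        if sim >= third_threshold or sim <= fourth_threshold:
            return obj  # target object in the current playing environment
    return None
```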
It should be further noted that, in any of the first, second, and third interactive scenes, after the target object is determined, a behavior difference parameter between the target object and the replaced object may also be obtained; when the behavior difference parameter meets a fifth threshold, the first video content is updated to generate third video content, where the third video content is the updated video content including the target object.
It should be further noted that, for the descriptions of the same or corresponding steps (or concepts) in the embodiments of the present application as in the other embodiments, reference may be made to the descriptions in the other embodiments, which are not repeated herein.
For example, taking the third interactive scene, assume that a test question has four possible answers A, B, C, and D. Based on the students' answers in the recording environment, the first video content originally explains answers B and C; based on the students' answers in the playing environment, however, answers A and D may need to be explained instead, so a student whose answer is A or D can be selected to answer. That is, when the similarity between the historical behavior information of the sixth object in the playing environment and the current behavior information of the first object in the recording environment meets the third threshold or the fourth threshold, the sixth object can be determined as the target object in the current playing environment. Since the answer information of the sixth object differs from that of the first object in the first video content, once the behavior difference parameter (such as the answer information) between the sixth object and the first object meets the fifth threshold, the first video content can be updated according to the answer information of the sixth object. Meanwhile, the played video content already includes the sixth object in the current playing environment, so the classroom interactivity of remote teaching is improved and remote teaching comes closer to real teaching.
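Continuing the sketch above, the update decision driven by the fifth threshold might read as follows; the difference measure, field names, and threshold value are again hypothetical:

```python
def maybe_update_video(first_video, target_answers, replaced_answers,
                       fifth_threshold=0.5):
    """If the behavior difference parameter between the target object and the
    replaced object meets the fifth threshold, re-plan which answers the
    video explains, yielding the 'third video content'."""
    questions = set(target_answers) | set(replaced_answers)
    differing = {q for q in questions
                 if target_answers.get(q) != replaced_answers.get(q)}
    difference = len(differing) / max(len(questions), 1)
    if difference >= fifth_threshold:
        # e.g. switch the explanation from answers B and C to answers A and D
        first_video["answers_to_explain"] = sorted(set(target_answers.values()))
        first_video["is_updated"] = True  # now the "third video content"
    return first_video
```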
This embodiment provides a processing method that elaborates the specific implementation of the foregoing embodiment. It can be seen that, with the technical scheme of the foregoing embodiment, during the playing of the first video content, the target object information in the current playing environment can be obtained from the interaction information of the first object and the second object in the first video content, and the first object and/or the second object in the first video content can then be replaced accordingly. This not only realizes remote teaching in remote areas, but also matches the interaction in the first video content for teaching to the target object in the current playing environment; that is, the classroom interactivity of remote teaching is improved, remote teaching comes closer to real teaching, and the efficiency of remote teaching is increased.
Referring to fig. 8, a schematic flow chart of still another processing method provided in the embodiments of the present application is shown. As shown in fig. 8, classroom A is a recording classroom and classroom B is a remote classroom; the processing method can comprise the following steps:
S801, performing video recording in classroom A;
S802, acquiring the video content of classroom A;
here, the video content of the classroom a includes a plurality of recorded objects, and also includes learning information, class information, and the like of the plurality of recorded objects. The plurality of recording objects at least comprise a first object and a second object.
S803, analyzing and processing the video content to obtain the interaction information of the first object and the second object in the video content;
here, the analyzing and processing of the video content may be performed by analyzing a pace of class attendance and interactive contents including review, knowledge point explanation, classroom interaction, question answering, exercise, and the like to obtain interactive information of the first object and the second object in the first video content.
S804, performing object matching between the first object and/or the second object in classroom A and the plurality of objects in classroom B;
S805, determining the target object in classroom B according to the matching result;
Here, before object matching is performed, related information of the plurality of objects in the playing environment (such as classroom B) may also be acquired, such as their learning situation information (including grade records, completion of historical assignments, accuracy, mastery of knowledge points, etc.) or their class situation information (including expressions, behaviors, attentiveness, etc.); object matching is then performed through data analysis in combination with the learning situation information or class situation information of the recorded objects in classroom A, so that the target object can be determined from the plurality of objects in classroom B according to the matching result.
S806, replacing the first object and/or the second object in the video content with the target object;
S807, generating the second video content and playing it in classroom B.
Here, the second video content is video content including the target object in classroom B.
It should be noted that, while the teacher gives the lesson in classroom A, video is recorded through the camera in classroom A to obtain the video content of classroom A. After the video content is acquired, it is analyzed and processed to obtain the interaction information of the first object and the second object. In addition, related information of the plurality of objects in the playing environment (such as classroom B) can be obtained, such as their learning situation information or class situation information; object matching is then performed through data analysis in combination with the learning situation information or class situation information of the recorded objects in classroom A, and the target object is determined from the plurality of objects in classroom B according to the matching result. The first object and/or the second object in the video content is replaced with the target object to obtain the second video content including the target object, and the second video content is finally played in classroom B. The played video content thus already contains the target object in the current playing environment, so the classroom interactivity of remote teaching is improved and remote teaching comes closer to real teaching.
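Tying S801 to S807 together, a toy end-to-end pipeline might read as follows, with every stage stubbed out since the embodiment leaves capture, analysis, matching, and rendering open:

```python
def remote_lesson(classroom_a_video, classroom_b_students):
    """S801-S807 pipeline sketch; each stage is a stub standing in for the
    analysis, matching, replacement, and playback machinery described above."""
    # S803: derive interaction information (stub: the teacher questions a student).
    interaction = {"first_object": classroom_a_video["teacher"],
                   "second_object": classroom_a_video["students"][0]}
    # S804-S805: match against classroom B (stub: take the first candidate).
    target = classroom_b_students[0]
    # S806: replace the recorded student with the matched target object.
    classroom_a_video["students"][0] = target
    # S807: play the resulting second video content in classroom B.
    print(f"Playing second video content featuring {target} in classroom B")
    return classroom_a_video  # the second video content

# Usage sketch with placeholder data:
remote_lesson({"teacher": "T", "students": ["S1", "S2"]}, ["B1", "B2"])
```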
This embodiment provides a processing method that elaborates the specific implementation of the foregoing embodiment. It can be seen that, with this technical scheme, after the target object in the current playing environment is determined by matching the objects in the recording environment and the playing environment, the first object and/or the second object can be replaced using the target object information. This not only realizes remote teaching in remote areas, but also matches the interaction in the video content for teaching to the target object in the current playing environment, thereby improving the classroom interactivity of remote teaching, bringing remote teaching closer to real teaching, and increasing its efficiency.
Based on the foregoing embodiments, refer to fig. 9, which shows a schematic structural diagram of a processing apparatus provided in an embodiment of the present application. As shown in fig. 9, the processing apparatus 9 may comprise a playing unit 91, an obtaining unit 92, and a replacing unit 93, wherein,
a playing unit 91, configured to play a first video content for teaching, where the first video content includes a plurality of objects in a recording environment;
an obtaining unit 92, configured to obtain target object information in a current playing environment based on at least interaction information of a first object and a second object in the first video content;
a replacing unit 93 configured to replace a first object and/or a second object in the first video content with the target object based on the target object information to play the video content including the target object; wherein the first object and the second object are of the same or different types.
In one embodiment, the obtaining unit 92 is further configured to obtain the first video content, where obtaining the first video content comprises: obtaining video contents in a plurality of recording environments; obtaining analysis data in each recording environment based on the video contents, the analysis data at least comprising the correlation of the recorded objects; determining target video content from the video contents based on the environment to be played; and replacing, into the target video content, content data from the other video contents that matches the environment to be played, based on the correlation of the recorded objects, to obtain the first video content.
In one embodiment, the obtaining unit 92 is further configured to obtain the interaction information of the first object and the second object in the first video content, where the interaction information is acquired in at least one of the following manners: obtaining the preloading information based on the first video content, obtaining the summary information based on the first video content, or obtaining the marking information during the recording of the video content.
In one embodiment, referring to fig. 10, the processing apparatus 9 may further comprise a determining unit 94 and a detecting unit 95, wherein,
a determining unit 94, configured to determine a first interaction behavior parameter between a first object and a second object that meet an interaction condition in a recording environment;
the obtaining unit 92 is further configured to, in the process of playing the first video content, obtain a second interaction behavior parameter between a third object and a fourth object in a playing environment based on at least a playing parameter of the first video content;
a detecting unit 95, configured to detect that a matching degree of the second interaction behavior parameter and the first interaction behavior parameter meets a first threshold, and determine the third object and the fourth object as target objects in a current playing environment; wherein the first object and the second object are of the same type, and the interaction condition is at least related to interaction information between the first object and the second object.
In one embodiment, the obtaining unit 92 is further configured to, during the playing of the first video content, obtain first related information of a fifth object in a playing environment based on at least the first video content;
the determining unit 94 is further configured to determine the fifth object as the target object in the current playing environment if the association degree of the first related information and the second related information meets a second threshold; wherein the second related information is related to the first object and the second object that meet the questioning condition in the recording environment.
In an embodiment, the detecting unit 95 is further configured to detect that a degree of association between the fifth object and the second object in at least one of the identification information, the class situation information, and the learning situation information meets a second threshold, and determine that the fifth object is the target object, where the fifth object and the second object are of the same type; or, determining that the fifth object is the target object when detecting that the association degree of the fifth object and the second object meets a second threshold, wherein the fifth object and the second object are different in type.
In one embodiment, the determining unit 94 is further configured to, during the playing of the first video content, obtain historical behavior information of a sixth object in a playing environment based on at least the first video content; and determining the sixth object as a target object in the current playing environment under the condition that the similarity between the historical behavior information and the current behavior information of the first object meets a third threshold or a fourth threshold; and the current behavior information represents the behavior of the first object which meets the answering condition in the recording environment.
In an embodiment, referring to fig. 10, the processing apparatus 9 may further comprise an updating unit 96, wherein,
the obtaining unit 92 is further configured to obtain a behavior difference parameter between the target object and the replaced object;
an updating unit 96, configured to update the first video content to play the updated video content including the target object if the behavior difference parameter meets a fifth threshold.
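Purely as a structural illustration of figs. 9 and 10, the units could be wired together as follows; the callable-injection style and the method names are assumptions, not the patent's prescribed design:

```python
class ProcessingApparatus:
    """Structural sketch of the apparatus: each unit is injected as a callable,
    so the apparatus only sequences play -> obtain -> replace, with an
    optional update step for the extended embodiments."""
    def __init__(self, play_unit, obtain_unit, replace_unit, update_unit=None):
        self.play_unit = play_unit          # playing unit 91
        self.obtain_unit = obtain_unit      # obtaining unit 92
        self.replace_unit = replace_unit    # replacing unit 93
        self.update_unit = update_unit      # updating unit 96 (optional)

    def process(self, first_video, playing_env):
        self.play_unit(first_video)
        target = self.obtain_unit(first_video, playing_env)
        second_video = self.replace_unit(first_video, target)
        if self.update_unit is not None:
            second_video = self.update_unit(second_video, target)
        return second_video
```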
The present embodiment provides a processing apparatus, in which the playing unit 91 is configured to play first video content for teaching, the first video content including a plurality of objects in a recording environment; the obtaining unit 92 is configured to obtain target object information in the current playing environment based on at least the interaction information of a first object and a second object in the first video content; and the replacing unit 93 is configured to replace the first object and/or the second object in the first video content with the target object based on the target object information, so as to play the video content including the target object, wherein the first object and the second object are of the same or different types. In this way, remote teaching in remote areas is realized, and the interaction in the first video content for teaching is matched to the target object in the current playing environment; that is, the classroom interactivity of remote teaching is improved, remote teaching comes closer to real teaching, and the efficiency of remote teaching is increased.
Based on the foregoing embodiments, refer to fig. 11, which shows a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 11, the electronic device 11 includes a processor 111, a memory 112, and a communication bus 113, wherein:
a communication bus 113 for implementing a communication connection between the processor 111 and the memory 112;
a memory 112 for storing a computer program capable of running on the processor 111;
a processor 111 configured, when running the computer program, to implement the steps of the processing method shown in any one of fig. 2 to 8.
Based on the foregoing embodiments, the present application further provides a computer storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the processing method shown in any one of fig. 2 to 8.
The embodiment provides a processing device in which first video content for teaching is played, the first video content including a plurality of objects in a recording environment; target object information in the current playing environment is obtained based on at least the interaction information of a first object and a second object in the first video content; and the first object and/or the second object in the first video content is replaced with the target object based on the target object information, so that the video content including the target object is played, wherein the first object and the second object are of the same or different types. In this way, remote teaching in remote areas is realized, and the interaction in the first video content for teaching is matched to the target object in the current playing environment; that is, the classroom interactivity of remote teaching is improved, remote teaching comes closer to real teaching, and the efficiency of remote teaching is increased.
The computer storage medium/memory may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); it may also be included in various electronic devices comprising one or any combination of the above-mentioned memories, such as mobile phones, computers, tablet devices, and personal digital assistants.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method described in the embodiments of the present application.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (8)

1. A processing method, comprising:
obtaining first video content, wherein the obtaining of the first video content comprises:
obtaining video contents in a plurality of recording environments;
obtaining analysis data in each recording environment based on the video contents, wherein the analysis data at least comprises the correlation of the recorded objects;
determining target video content from the video contents based on the environment to be played;
replacing, into the target video content, content data from the other video contents that matches the environment to be played, based on the correlation of the recorded objects, to obtain the first video content;
playing first video content for teaching, wherein the first video content comprises a plurality of objects in a recording environment;
obtaining target object information in a current playing environment at least based on interaction information of a first object and a second object in the first video content;
replacing a first object and/or a second object in the first video content with the target object based on the target object information to play video content including the target object;
wherein the first object and the second object are of the same or different types;
acquiring behavior difference parameters of the target object and the replaced object;
and updating the first video content to play the updated video content comprising the target object under the condition that the behavior difference parameter meets a fifth threshold value.
2. The method of claim 1, further comprising, before the obtaining of target object information based on at least the interaction information of the first object and the second object in the first video content:
acquiring the interaction information of the first object and the second object in the first video content; wherein the interaction information is acquired in at least one of the following manners: obtaining the preloading information based on the first video content, obtaining the summary information based on the first video content, or obtaining the marking information during the recording of the video content.
3. The method of claim 1, the obtaining target object information in a current playback environment, comprising:
determining a first interaction behavior parameter between a first object and a second object which accord with an interaction condition in a recording environment;
in the process of playing the first video content, acquiring a second interaction behavior parameter between a third object and a fourth object in a playing environment at least based on the playing parameter of the first video content;
determining the third object and the fourth object as target objects in the current playing environment when detecting that the matching degree of the second interactive behavior parameter and the first interactive behavior parameter accords with a first threshold value;
wherein the first object and the second object are of the same type, and the interaction condition is at least related to interaction information between the first object and the second object.
4. The method of claim 1, the obtaining target object information in a current playback environment, comprising:
in the process of playing the first video content, acquiring first related information of a fifth object in a playing environment at least based on the first video content;
determining the fifth object as a target object in the current playing environment under the condition that the association degree of the first related information and the second related information accords with a second threshold value;
wherein the second related information is related to the first object and the second object that meet the questioning condition in the recording environment.
5. The method according to claim 4, detecting that the association degree of the fifth object with the second object in at least one of the identification information, the class situation information and the learning situation information meets a second threshold, and determining that the fifth object is the target object, wherein the fifth object and the second object are of the same type; or,
and determining that the fifth object is the target object when the association degree of the fifth object and the second object is detected to meet a second threshold, wherein the fifth object and the second object are different in type.
6. The method of claim 1, the obtaining target object information in a current playback environment, comprising:
in the process of playing the first video content, acquiring historical behavior information of a sixth object in a playing environment at least based on the first video content;
determining that the sixth object is a target object in the current playing environment under the condition that the similarity between the historical behavior information and the current behavior information of the first object meets a third threshold or a fourth threshold;
and the current behavior information represents the behavior of the first object which meets the answering condition in the recording environment.
7. An electronic device comprising a memory and a processor; wherein,
the memory for storing a computer program operable on the processor;
the processor, when running the computer program, is configured to perform the steps of the processing method of any of claims 1 to 6.
8. A processing apparatus, comprising:
the system comprises a playing unit, a recording unit and a playing unit, wherein the playing unit is used for playing first video content used for teaching, and the first video content comprises a plurality of objects in a recording environment;
the acquisition unit is used for acquiring target object information in the current playing environment at least based on the interaction information of the first object and the second object in the first video content;
a replacing unit configured to replace a first object and/or a second object in the first video content with the target object based on the target object information to play video content including the target object; wherein the first object and the second object are of the same or different types;
the obtaining unit is further used for obtaining the first video content; wherein obtaining the first video content comprises: obtaining video contents in a plurality of recording environments; obtaining analysis data in each recording environment based on the video contents, wherein the analysis data at least comprises the correlation of the recorded objects; determining target video content from the video contents based on the environment to be played; and replacing, into the target video content, content data from the other video contents that matches the environment to be played, based on the correlation of the recorded objects, to obtain the first video content;
the acquiring unit is further used for acquiring a behavior difference parameter of the target object and the replaced object; and updating the first video content to play the updated video content comprising the target object under the condition that the behavior difference parameter meets a fifth threshold value.
CN201911205436.3A 2019-11-29 2019-11-29 Processing method and device and electronic equipment Active CN111047930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911205436.3A CN111047930B (en) 2019-11-29 2019-11-29 Processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911205436.3A CN111047930B (en) 2019-11-29 2019-11-29 Processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111047930A CN111047930A (en) 2020-04-21
CN111047930B true CN111047930B (en) 2021-07-16

Family

ID=70234194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911205436.3A Active CN111047930B (en) 2019-11-29 2019-11-29 Processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111047930B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466240B (en) * 2022-01-27 2024-06-25 北京精鸿软件科技有限公司 Video processing method, device, medium and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7826644B2 (en) * 2002-12-31 2010-11-02 Rajeev Sharma Method and system for immersing face images into a video sequence
CN106686463A (en) * 2016-12-09 2017-05-17 天脉聚源(北京)传媒科技有限公司 Video role replacing method and apparatus
JP2017126899A (en) * 2016-01-14 2017-07-20 キヤノン株式会社 Image processing device and image processing method
US9799096B1 (en) * 2014-07-08 2017-10-24 Carnegie Mellon University System and method for processing video to provide facial de-identification
CN108040290A (en) * 2017-12-22 2018-05-15 四川长虹电器股份有限公司 TV programme based on AR technologies are changed face method in real time
CN108830786A (en) * 2018-06-12 2018-11-16 北京新唐思创教育科技有限公司 Computer readable storage medium, video replacement synthetic method and system
CN109788311A (en) * 2019-01-28 2019-05-21 北京易捷胜科技有限公司 Personage's replacement method, electronic equipment and storage medium

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398829B (en) * 2007-09-30 2013-10-23 国际商业机器公司 Method and apparatus for marking and modifying video, and video processing method and apparatus
BRPI1013281A2 (en) * 2009-06-13 2019-09-24 Rolestar Inc system for sequential juxtaposition of separately recorded scenes
CN102196245A (en) * 2011-04-07 2011-09-21 北京中星微电子有限公司 Video play method and video play device based on character interaction
CN103164991A (en) * 2013-03-01 2013-06-19 广州市信和电信发展有限公司 Network interactive teaching and research application system
CN103455800A (en) * 2013-09-09 2013-12-18 苏州大学 Advertisement system based on intelligent identification and method for pushing corresponding advertisement
CN104517472A (en) * 2013-09-29 2015-04-15 无敌科技(西安)有限公司 Teaching content change system and method
US9324373B2 (en) * 2013-11-21 2016-04-26 International Business Machines Corporation Determining updates for a video tutorial
CN103634503A (en) * 2013-12-16 2014-03-12 苏州大学 Video manufacturing method based on face recognition and behavior recognition and video manufacturing method based on face recognition and behavior recognition
US20150279230A1 (en) * 2014-03-26 2015-10-01 Wai Lana Productions, Llc Method for yoga instruction with media
CN204732002U (en) * 2014-12-30 2015-10-28 天津智新信息科技有限公司 A kind of autonomous lecture system
US10158826B2 (en) * 2015-10-07 2018-12-18 Reel Pro Motion, LLC System and method for recording and training athletes from multiple points of view
CN105869217B (en) * 2016-03-31 2019-03-19 南京云创大数据科技股份有限公司 A kind of virtual real fit method
CN106792147A (en) * 2016-12-08 2017-05-31 天脉聚源(北京)传媒科技有限公司 A kind of image replacement method and device
CN108958803B (en) * 2017-05-19 2022-07-29 腾讯科技(北京)有限公司 Information processing method, terminal equipment, system and storage medium
CN109040157A (en) * 2017-06-08 2018-12-18 深圳市鹰硕技术有限公司 A kind of recorded broadcast data Learning-memory behavior method Internet-based
CN107886950A (en) * 2017-12-06 2018-04-06 安徽省科普产品工程研究中心有限责任公司 A kind of children's video teaching method based on speech recognition
WO2020037681A1 (en) * 2018-08-24 2020-02-27 太平洋未来科技(深圳)有限公司 Video generation method and apparatus, and electronic device
CN109788312B (en) * 2019-01-28 2022-10-21 北京易捷胜科技有限公司 Method for replacing people in video

Also Published As

Publication number Publication date
CN111047930A (en) 2020-04-21

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant