CN115050228A - Material collecting method and device and electronic equipment

Info

Publication number: CN115050228A
Application number: CN202210683066.XA
Authority: CN (China)
Prior art keywords: virtual, user, scene, capture device, present disclosure
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN115050228B (en)
Inventor: 杨静莲
Assignee (current and original): Beijing Xintang Sichuang Educational Technology Co Ltd
Application filed by Beijing Xintang Sichuang Educational Technology Co Ltd; priority to CN202210683066.XA
Publication of CN115050228A; application granted and published as CN115050228B

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 - Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 - Special adaptations for executing a specific game genre or game mode
    • G09B5/08 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations, with provision for individual teacher-student communication
    • G09B7/00 - Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02 - Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present disclosure provides a material collecting method and apparatus, and an electronic device, wherein the method includes: displaying a virtual capture device in a virtual education scene, where the virtual capture device is associated with the position of a corresponding first user virtual character; and, in response to a start operation for the virtual capture device, controlling the virtual capture device to collect materials in the virtual education scene based on the position of the virtual capture device. The method enables a student user to control a virtual character to collect materials through human-computer interaction, strengthens the student user's interest in exploration and capacity for autonomous learning, enriches the student user's course experience, and thereby improves the learning effect.

Description

Material collecting method and device and electronic equipment
Technical Field
The invention relates to the field of network education, in particular to a material collecting method and device and electronic equipment.
Background
With the rapid development of computer and network technologies, learning over the network has become a popular mode. Online learning takes many forms, and virtual scenes are often designed to raise students' interest in learning.
In the related art, a student user can log in to the application client of an online learning platform and enter a virtual-reality-based education and teaching system to study. While studying, the student user downloads three-dimensional courseware, asks questions about it, and a teacher answers online, thereby realizing interactive teaching.
Disclosure of Invention
According to an aspect of the present disclosure, there is provided a material collecting method for a virtual education scene including a first user virtual character and at least one scene prop, the method including:
displaying a virtual capture device in the virtual education scene, wherein the virtual capture device is associated with the position of the corresponding first user virtual character; and
controlling the virtual capture device to collect materials in the virtual education scene based on the position of the virtual capture device.
According to another aspect of the present disclosure, there is provided a material collecting apparatus, including:
a display module, configured to display a virtual capture device in the virtual education scene based on configuration information of the virtual capture device; and
a control module, configured to control the virtual capture device to collect materials in the virtual education scene based on the position of the virtual capture device, where the virtual capture device is associated with the position of the corresponding first user virtual character.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory storing a program;
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method according to the exemplary embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method according to the exemplary embodiments of the present disclosure.
According to one or more technical solutions provided in the embodiments of the present disclosure, the virtual capture device is controlled to collect materials in the virtual education scene based on its position, and the virtual capture device is associated with the position of the corresponding first user virtual character. Therefore, as the first user virtual character moves through the virtual education scene, a user who wants to collect the materials around that character can use the virtual capture device to record what the character finds along the way as a set of materials, providing material for subsequent group discussion and sharing.
Moreover, the exemplary embodiments of the present disclosure display the virtual capture device in the virtual education scene and associate it with the position of the corresponding first user virtual character to realize material collection, which in essence blends the concept of the metaverse into the material collection process, thereby improving the students' experience in the virtual education scene and further strengthening the student users' interest in exploration.
Drawings
Further details, features and advantages of the disclosure are disclosed in the following description of exemplary embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 shows a schematic diagram of an example system in which various methods described herein may be implemented according to an example embodiment of the present disclosure;
FIG. 2 shows a material collection method flow diagram of an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a first virtual educational scenario illustrating an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a camera settings interface of an image capture device in an exemplary embodiment of the present disclosure;
FIG. 5 is a diagram illustrating a first user avatar and image capture device pose synchronization in accordance with an exemplary embodiment of the present disclosure;
FIG. 6A is a schematic diagram of a first position of a first user avatar in a second virtual educational scenario illustrating an embodiment of the present disclosure;
FIG. 6B is a diagram illustrating a second location of a first user avatar in a second virtual educational scenario, in accordance with an exemplary embodiment of the present disclosure;
FIG. 7A shows an initial interface schematic of an image capture device of an exemplary embodiment of the present disclosure;
FIG. 7B shows a display interface schematic diagram of an image capture device in a camera state according to an exemplary embodiment of the present disclosure;
FIG. 7C shows a display interface schematic diagram of an image capture device in a video recording state according to an exemplary embodiment of the present disclosure;
FIG. 7D illustrates a diagram of invoking local camera permissions in an exemplary embodiment of the present disclosure;
FIG. 7E illustrates a presentation interface schematic of an image material library in an exemplary embodiment of the present disclosure;
FIG. 8A illustrates an audio recording interface schematic diagram during audio recording in an exemplary embodiment of the present disclosure;
FIG. 8B shows an audio recording interface schematic diagram when audio recording ends in an exemplary embodiment of the present disclosure;
FIG. 8C is a schematic diagram of an audio material presentation interface in an exemplary embodiment of the present disclosure;
FIG. 9A shows a schematic view of a virtual educational scene containing dynamic scene props in an exemplary embodiment of the present disclosure;
FIG. 9B shows a schematic view of a virtual educational scene containing static scene props in an exemplary embodiment of the present disclosure;
FIG. 10A illustrates a schematic diagram of a second user task interface of an exemplary embodiment of the present disclosure;
FIG. 10B illustrates a display interface diagram of a first user when a second user accepts a task according to an exemplary embodiment of the present disclosure;
FIG. 10C illustrates a display interface diagram when a second user declines to accept a task, according to an exemplary embodiment of the present disclosure;
FIG. 10D is a schematic diagram illustrating a non-user avatar query interface in a virtual educational scenario in accordance with an exemplary embodiment of the present disclosure;
FIG. 11A illustrates a first sharing interface of an exemplary embodiment of the present disclosure;
FIG. 11B illustrates a second type of sharing interface of an exemplary embodiment of the present disclosure;
FIG. 11C illustrates a third sharing interface of an exemplary embodiment of the present disclosure;
FIG. 12 shows a block schematic diagram of the material collecting apparatus of an exemplary embodiment of the present disclosure;
FIG. 13 shows a schematic block diagram of a chip of an exemplary embodiment of the present disclosure;
FIG. 14 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description. It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting, and those skilled in the art will understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Before describing the embodiments of the present disclosure, the related terms referred to in the embodiments of the present disclosure are first explained as follows:
The metaverse (Metaverse) is a virtual world constructed and linked by technological means that maps to and interacts with the real world, and that carries a digital living space with a novel social system.
A virtual scene is a scene displayed (or provided) when an application program runs on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. It may be any of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; the embodiments of the present disclosure do not limit the dimensions of the virtual scene.
A non-player character ("Non-Player Character", NPC) is a character type in games: a game character not controlled by a player, which guides the player's progress through the game and is an important core character of the game. The non-player character in the exemplary embodiments of the present disclosure refers to a non-user character.
In the related art, a student user can log in to the application client of an online learning platform and select a desired course to start learning online with a teacher. The user enters the system through the application client and selects a corresponding identity (corresponding user). After identity verification passes, a student chooses to enter different virtual scenes according to his or her learning needs. Within a virtual scene, the student can click at will on the animals and plants there; introduction information about them is then retrieved from a live-action information database and displayed, which facilitates the student's learning and achieves the purpose of panoramic teaching.
In practical applications, a student user can freely choose which animals and plants to study in a virtual education scene, but there is no collection or note-taking function, so the student has to search the scene again when studying it a second time, which wastes time.
In view of the above problems, exemplary embodiments of the present disclosure provide a material collecting method and apparatus, so that a student user collects materials through a virtual character in the virtual scene, which strengthens the sense of immersion in the interaction and makes the student user's experience better and more realistic.
Fig. 1 shows a schematic diagram of an example system in which various methods described herein may be implemented according to an example embodiment of the present disclosure. As shown in fig. 1, a system 100 of an exemplary embodiment of the present disclosure may include: a first terminal 101, a second terminal 102 and a server 103.
In practical applications, as shown in fig. 1, clients are installed on the first terminal 101 and the second terminal 102, and the two clients may differ. For example, the first client 1011 may be a teacher client and the second client 1021 a student client. The teacher client has higher permissions than the student client and can configure various teaching tasks and manage student clients.
As shown in fig. 1, the first terminal 101 installs and runs a teacher client that supports teaching interaction by the teacher user, and a first user interface of the teacher client is displayed on the screen of the first terminal 101; the first user interface displays a virtual teaching scene, a character control, a management interface control, and a message input control. The second terminal 102 installs a student client that supports teaching interaction by the student user, and a second user interface of the student client is displayed on the screen of the second terminal 102; the second user interface displays a virtual teaching scene, a character control, and a message input control.
In the first user interface and the second user interface of the exemplary embodiment of the present disclosure, the virtual teaching scene may be a virtual scene related to the teaching content, or may be designed according to the teaching content. Various virtual objects are arranged in the virtual teaching scene, and the virtual objects can be virtual props or virtual characters.
In practical applications, each virtual object has its own shape and volume in the virtual scene and occupies part of the space in the scene. When the virtual teaching scene is three-dimensional, the virtual object may be a three-dimensional model built from the property components of the object it represents. The same virtual object can present different appearances by wearing different skins.
For example, the virtual character of the exemplary embodiments of the present disclosure may be a virtual character participating in an instructional interaction in a virtual educational scene. The number of virtual characters participating in teaching interaction can be preset, and can also be dynamically determined according to the number of clients participating in interaction. The virtual characters may include at least user characters such as a student virtual character, a teacher virtual character, etc. controlled by the character manipulation control, or non-user characters provided in the virtual education scene for interaction.
For example, a character manipulation control can be used to control a user character and may include a direction control and an action control. The direction control moves the user character toward a target direction, and the action control makes the user character perform a preset action, for example jumping, waving, running, or nodding, but not limited thereto. For instance, if the icon of the student client's action control is a jump icon, the user character performs a jump when the student user clicks it.
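To make the division of labor between the two controls concrete, the following TypeScript sketch shows one way a client might route direction and action inputs to a user character. It is a minimal illustration under assumed names (UserCharacter, onDirectionControl, and so on); the disclosure does not specify an implementation.

```typescript
type Vec2 = { x: number; y: number };

type PresetAction = "jump" | "wave" | "run" | "nod";

class UserCharacter {
  position: Vec2 = { x: 0, y: 0 };
  currentAction: PresetAction | null = null;

  moveToward(direction: Vec2, distance: number): void {
    // Normalize the direction so speed is independent of input magnitude.
    const len = Math.hypot(direction.x, direction.y) || 1;
    this.position.x += (direction.x / len) * distance;
    this.position.y += (direction.y / len) * distance;
  }

  perform(action: PresetAction): void {
    // Trigger the preset action; a real client would drive an animation system.
    this.currentAction = action;
  }
}

// Direction control: move the user character toward the target direction.
function onDirectionControl(character: UserCharacter, input: Vec2): void {
  character.moveToward(input, 1.0);
}

// Action control: show a preset action such as jumping when its icon is clicked.
function onActionControl(character: UserCharacter, action: PresetAction): void {
  character.perform(action);
}

const student = new UserCharacter();
onDirectionControl(student, { x: 1, y: 0 }); // move right
onActionControl(student, "jump");            // show the jump action
```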
For example, the management interface control may be used to call up a management interface where a teacher user may open, close, and configure various teaching tasks and view task execution status of various teaching tasks. For example: the teaching task can be various answering tasks, and the task execution state can comprise answering result display, answering remaining time and the like. Another example is: the teaching task can be various material collection tasks, and the materials collected in the material collection task execution process can be stored in a terminal locally or stored in a material library on line.
For example, the message input control can be used by a user to input interactive messages and thereby communicate. Message input controls may include a voice input control, a text input control, and the like. In practical applications, after the teacher user configures the scene props in the management interface, a student user can move freely in the virtual education scene to collect materials.
The teaching task content of an exemplary embodiment of the present disclosure can be published as text and/or audio. When the teacher user publishes a question as text, the question content is entered through the text input control; when publishing as audio, the audio of the question content is entered through the voice input control. When publishing in both text and audio at once, the teacher user can publish questions in two modes.
In the first mode, the teacher user inputs the collection task content through the text input control; the teaching task can then be displayed in the virtual education scene while the server converts the content into audio and broadcasts it to the student clients. For example, the server can have a non-user character play the audio of the question content.
In the second mode, the teacher user inputs the audio of the collection task content through the voice input control; the server broadcasts this audio to each student client and can also convert it into text for display in the virtual education scene.
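The two publishing modes amount to a small message-flow pattern. The TypeScript sketch below illustrates it with placeholder converters standing in for real text-to-speech and speech-recognition services; every identifier here is an assumption for illustration, not an API from the disclosure.

```typescript
interface StudentClient {
  showTaskText(text: string): void;
  playTaskAudio(audio: ArrayBuffer): void;
}

// Placeholder converters; a real deployment would call TTS/ASR services.
function textToSpeech(text: string): ArrayBuffer {
  return new TextEncoder().encode(text).buffer as ArrayBuffer; // stand-in encoding
}
function speechToText(audio: ArrayBuffer): string {
  return new TextDecoder().decode(audio); // stand-in decoding
}

// First mode: teacher enters text; it is shown in the scene and a synthesized
// audio version is broadcast (e.g. voiced by a non-user character).
function publishTaskAsText(text: string, students: StudentClient[]): void {
  const audio = textToSpeech(text);
  for (const s of students) {
    s.showTaskText(text);
    s.playTaskAudio(audio);
  }
}

// Second mode: teacher records audio; it is broadcast and a transcribed text
// version is displayed in the virtual education scene.
function publishTaskAsAudio(audio: ArrayBuffer, students: StudentClient[]): void {
  const text = speechToText(audio);
  for (const s of students) {
    s.playTaskAudio(audio);
    s.showTaskText(text);
  }
}

const demoClient: StudentClient = {
  showTaskText: t => console.log("task text:", t),
  playTaskAudio: a => console.log("task audio bytes:", a.byteLength),
};
publishTaskAsText("Collect three desert plants", [demoClient]);
```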
In an alternative, as shown in fig. 1, the clients installed on the first terminal 101 and the second terminal 102 may be the same type of application on the same or different operating system platforms (Android, iOS, Huawei HarmonyOS, etc.). The first terminal 101 may generally refer to one of a plurality of terminals and the second terminal 102 to another of them; the present embodiment is illustrated with only the first terminal 101 and the second terminal 102. The device types of the first terminal 101 and the second terminal 102 are the same or different and include at least one of a smartphone, a tablet, an e-book reader, a digital player, a laptop portable computer, and a desktop computer.
In an alternative, as shown in fig. 1, the first terminal 101 and the second terminal 102 may be connected to the server 103 through a wireless network or a wired network. The server 103 includes at least one of a server, a server cluster composed of a plurality of servers, a cloud computing platform, and a virtualization center. The server 103 is used for providing background service for online teaching interaction. The server 103 undertakes primary computing work, and the terminal undertakes secondary computing work; or, the server 103 undertakes the secondary computing work, and the terminal undertakes the primary computing work; or, the server 103 and the terminal perform cooperative computing by using a distributed computing architecture.
In an alternative, as shown in fig. 1, the server 103 includes a memory 1031, a processor 1032, a user account database 1033, a task service module 1034, and a user-oriented Input/Output Interface (I/O interface) 1035. The processor 1032 is configured to load instructions stored in the server 103 and to process data in the user account database 1033 and the task service module 1034; the user account database 1033 is configured to store data of the user accounts used by the first terminal 101 and the second terminal 102, such as the avatar, nickname, rating, and service area of each user account; the task service module 1034 is configured to provide a plurality of virtual teaching scenes for the user to select, such as a desert scene, a tropical rainforest scene, or a space teaching scene; the user-oriented I/O interface 1035 is used to establish communication with the first terminal 101 and/or the second terminal 102 through a wireless network or a wired network to exchange data.
The material collection method provided by the exemplary embodiment of the present disclosure can be applied to a virtual education scene, and the users participating in the interaction may include a teacher user and/or at least one student user. When a teacher user logs in to the teacher client, the teacher user can choose to open access to a certain virtual teaching scene according to the course schedule, after which both the teacher user and the student users can enter that scene. As shown in fig. 1, the server 103 may look up the account information of the teacher user and of the student users in the user account database 1033, display a teacher virtual character in the virtual teaching scene based on the teacher's account information, and display student virtual characters based on the students' account information. After that, the teacher user can control the teacher virtual character through the character control of the teacher client and input what the character should express through the message input control. Similarly, a student user can operate the student virtual character through the character control of the student client and input what the character should express through the message control.
Illustratively, when the teacher user and the student users enter the virtual education scene, the teacher user may guide the students into the class state step by step by voice through the message input control. For example, when the virtual education scene is a desert scene, the teacher user may explain by voice that the students can move their own virtual characters freely through the desert scene using the character controls, so that the students gradually enter the class through their virtual characters.
The material collection method of the exemplary embodiment of the present disclosure may be applied to a terminal or a chip in a terminal. The terminal may be the first terminal or the second terminal as described above. The method of the exemplary embodiment of the present disclosure is explained in detail below with reference to the accompanying drawings.
The material collection method of the exemplary embodiment of the present disclosure is used in a virtual education scene that includes a first user virtual character and at least one scene prop. The scene prop here is a scene prop in the broad sense: it can include scene props in the narrow sense as well as virtual characters. The virtual characters may include non-user characters and may also include user characters other than the first user virtual character.
In practical applications, the virtual education scene of the exemplary embodiment of the present disclosure may be designed according to the actual course schedule, for example tropical rainforest, desert, or mountain glacier scenes, and scene props can be configured in them.
Fig. 2 shows a material collection method flow diagram of an exemplary embodiment of the present disclosure. As shown in fig. 2, the material collecting method of the exemplary embodiment of the present disclosure includes:
step 201: and displaying the virtual acquisition equipment in the virtual education scene. The virtual acquisition device can be an image acquisition device or an audio acquisition device. The image acquisition device can also have the function of an audio acquisition device. The device parameters of the virtual collecting device may be preset by a server, or may be configured by a user through a client.
For example, after the virtual education scene displays the virtual capturing device, the terminal may transmit a configuration change instruction to the server in response to a configuration change operation for the virtual capturing device. Under the instruction of the configuration change instruction, the server may change the configuration information of the virtual acquisition device based on the scene segment within the collection area of the virtual acquisition device and the acquisition parameters of the virtual acquisition device. It should be understood that a scene segment herein may refer to a scene segment containing a scene prop, or may refer to a scene segment without a scene prop.
In one example, when the virtual capture device is an image capture device, the configuration change instruction includes a first change instruction, the first change instruction is used for instructing the server to change, and the category of the image capture device is different according to the category of the image capture device, and the image capture device may include at least one of a high-speed camera, an infrared thermal imaging camera, and a microscope camera.
The virtual collecting equipment control of the exemplary embodiment of the disclosure can comprise an image collecting control and an audio collecting control, wherein a first sub-control is used for collecting image information, a second sub-control is used for collecting audio information, and a student user can selectively select the first sub-control or the second sub-control to collect materials and store the collected materials in a material library so as to be convenient for the student user to call at any time. If the functionality of the second sub-control is integrated with the first sub-control, the first sub-control may also be selected without selecting the second sub-control, but without excluding the possibility of selecting the second sub-control.
Fig. 3 shows a schematic diagram of a first virtual education scene according to an exemplary embodiment of the disclosure. As shown in fig. 3, the virtual educational scene 300 can display a virtual collection device control 301, a material library control 302, a message control 303, a character manipulation control 304, a camera setting button 305, a cheetah 306, a first user virtual character 307, a second user virtual character 308, an image collection device 309; the virtual capture device control 301 includes a first sub-control 3012 for collecting image information, and a second sub-control 3011 for collecting audio information.
As shown in fig. 3, the virtual education scene 300 includes various plants, animals, rivers, and the like. The student user can control the first user avatar 307 through the character manipulation controls 304 (such as the direction control and the action control) to enter the virtual educational scene 300 and to explore and collect materials there.
Mismatched lens settings lead to poorly collected material that cannot be displayed properly. Clicking the camera settings button 305 in the upper left corner of fig. 3 pops up the camera settings interface. FIG. 4 illustrates the camera settings interface of an image capture device in an exemplary embodiment of the present disclosure. As shown in fig. 4, in the camera settings interface 400 the student user can not only configure the parameters of the image capture device but also change the category of the image capture device, thereby obtaining material with a better effect.
In another example, when the virtual capture device in the virtual education scene is an image capture device, the configuration change instruction includes a second change instruction used to instruct the server to change the lens parameters of the image capture device. Depending on the lens of the image capture device, the lens parameters include at least one of focal length, aperture, and sensitivity.
As shown in fig. 4, parameters such as the category, photo quality, smart mode, sensitivity, focal length, and aperture of the image capture device can be changed in the camera settings interface 400; on this basis, the student user can change the configuration parameters of the image capture device through the camera settings interface 400.
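One way to model the first and second change instructions is as a small tagged message type handled on the server. The sketch below is an illustration under assumed names: the category values and lens parameters come from the surrounding text, while the device IDs, field names, and handler are invented for the example.

```typescript
type CameraCategory = "ordinary" | "high-speed" | "infrared-thermal" | "microscope";

interface LensParams { focalLengthMm?: number; aperture?: number; iso?: number }

type ConfigChangeInstruction =
  | { kind: "category"; deviceId: string; category: CameraCategory } // first change instruction
  | { kind: "lens"; deviceId: string; lens: LensParams };            // second change instruction

interface DeviceConfig { category: CameraCategory; lens: Required<LensParams> }

// Server-side store of virtual capture device configurations.
const configs = new Map<string, DeviceConfig>();
configs.set("cam-309", { category: "ordinary", lens: { focalLengthMm: 35, aperture: 2.8, iso: 200 } });

// Server-side handling of an instruction sent by the terminal.
function applyConfigChange(instr: ConfigChangeInstruction): void {
  const cfg = configs.get(instr.deviceId);
  if (!cfg) return;
  if (instr.kind === "category") {
    cfg.category = instr.category;
  } else {
    Object.assign(cfg.lens, instr.lens); // merge only the parameters the user changed
  }
}

// e.g. the student switches device 309 to a high-speed camera via interface 400:
applyConfigChange({ kind: "category", deviceId: "cam-309", category: "high-speed" });
```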
Regarding the category of the image capture device: as shown in fig. 3, a running cheetah 306 can be placed in the virtual education scene 300. A real cheetah can reach 115 km/h at top speed, so an ordinary camera cannot collect material of the running cheetah 306. Accordingly, if the image capture device 309 cannot meet the shooting requirements of a scene prop moving at high speed, the student user can call up the camera settings interface 400 shown in fig. 4 through the camera settings button 305 and switch the category of the image capture device 309 from an ordinary image capture device to a high-speed camera, so that the collected material has the best effect.
When the virtual education scene 300 shown in fig. 3 is at night and the displayed category of the image capture device 309 is an ordinary image capture device, the image capture device 309 cannot meet the material collection requirements for the scene props. In this case, the student user can switch the ordinary image capture device to an infrared thermal imaging camera through the camera settings button 305.
As shown in fig. 3, when collecting material about microscopic creatures in the virtual education scene 300 and the displayed category of the image capture device 309 is an ordinary image capture device, the image capture device 309 cannot meet the material collection requirements for such microscopic scene props. In this case, the student user can switch the ordinary image capture device to a microscope camera through the configuration interface and then use the microscope camera to observe the microscopic morphology of plant leaves in the virtual education scene 300.
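The three switches above follow one pattern: pick the camera category from the conditions of the scene segment being shot. The sketch below encodes that pattern; the threshold values and condition fields are illustrative assumptions only, not values from the disclosure.

```typescript
type CameraCategory = "ordinary" | "high-speed" | "infrared-thermal" | "microscope";

interface SceneConditions {
  subjectSpeedKmh: number; // e.g. a running cheetah can reach 115 km/h
  isNight: boolean;
  subjectSizeMm: number;   // approximate size of the subject
}

function recommendCategory(c: SceneConditions): CameraCategory {
  if (c.subjectSizeMm < 1) return "microscope";    // microscopic creatures, leaf surfaces
  if (c.isNight) return "infrared-thermal";        // night scenes
  if (c.subjectSpeedKmh > 60) return "high-speed"; // fast-moving scene props
  return "ordinary";
}

// The running cheetah would be assigned the high-speed camera:
console.log(recommendCategory({ subjectSpeedKmh: 115, isNight: false, subjectSizeMm: 2000 }));
```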
The image capture device of the exemplary embodiments of the present disclosure is associated with the position of the corresponding first user avatar. That is, the image capture device is affected by the position of the first user avatar, so that it can capture what the first user avatar sees in the virtual educational scene. It should be understood that when the image capture device is associated with the position of the corresponding first user avatar, the poses of the image capture device and the first user avatar may or may not be synchronized.
Illustratively, the image capture device may be associated with the position of the corresponding first user avatar and also synchronized with its pose. In that case, as the pose of the first user avatar changes, the image capture device remains relatively stationary with respect to the first user avatar.
In one example, the image capture device changes with the pose of the first user avatar but is not pose-synchronized with it in a strict sense. For example, the image capture device may be in the hand of the first user avatar. The student user can control the first user virtual character to move freely in the virtual education scene through the character controls, thereby changing the position and pose of the image capture device. In this case, the image capture device is controlled by the hand pose of the first user avatar but is not strictly synchronized with it.
For example: as shown in fig. 3, when performing material collection on a running cheetah 306 in the virtual education scene 300, the student user can manipulate the first user virtual character 307 to approach the cheetah 306 based on the character manipulation control 304, select a suitable angle to hold the image capture device 309, and then click the first sub-control 3012, so that the terminal performs material collection on the running cheetah 306 in response to the click operation of the first sub-control 3012.
In another example, the image capture device changes with the pose of the first user avatar and is pose-synchronized with it in a strict sense. When the virtual capture device is an image capture device, the field angle of the image capture device can be set to match the user's field angle, so that when the pose of the first user virtual character changes, the field of view captured by the image capture device stays synchronized with the field of view the student user sees.
FIG. 5 is a schematic diagram illustrating a manner in which a first user avatar synchronizes with the pose of an image capture device according to an exemplary embodiment of the present disclosure. As shown in fig. 5, in the virtual education scene 500, the image capture device 501 is placed behind the head of the first user avatar 502 and follows the avatar's line of sight. The image capture device 501 is thus relatively stationary with respect to the first user avatar 502 and is associated not only with the avatar's position but also with its pose. In this case, the position and head movement of the first user avatar 502 are controlled by the character manipulation control 504: the position of the first user avatar 502 within the virtual educational scene 500 may be controlled by the direction control, and the head movement by the action control. The student user can control the first user virtual character 502 to move freely in the virtual educational scene 500 with these two controls. Throughout this process, since the first user avatar 502 is pose-synchronized with the image capture device 501, the field of view of the image capture device 501 varies with the field of view of the first user avatar 502. For example, if the field of view of the human eye is 124°, the field of view of the image capture device 501 is also 124°, and when the field of view of the first user avatar 502 changes, the field of view of the image capture device 501 changes in synchrony.
As shown in fig. 5, to ensure that the material captured by the image capture device 501 is synchronized with what the student user sees through the first user avatar 502, the lens of the image capture device 501 may be set to face the same direction as the first user avatar 502. For example, when the first user avatar 502 collects material of the cheetah 505, since the lens of the image capture device 501 faces the same direction as the avatar, the first user avatar 502 is moved near the cheetah 505 with the direction control of the character manipulation control 504, and the head pose of the first user avatar 502, and thus the field of view of the image capture device 501, is controlled with the action control, so that the material is captured.
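In the strict synchronization of fig. 5, the camera effectively re-derives its pose from the avatar every frame. The sketch below shows one plausible per-frame update: the lens sits behind the head, faces along the avatar's line of sight, and copies the avatar's field of view (e.g. 124°). The vector math and the back offset are illustrative assumptions, not the disclosed method.

```typescript
interface Pose { x: number; y: number; z: number; yawRad: number; pitchRad: number }

interface Avatar extends Pose { fovDeg: number }
interface Camera extends Pose { fovDeg: number }

function syncCameraToAvatar(avatar: Avatar, camera: Camera, backOffset = 0.3): void {
  // Position the camera slightly behind the head, along the view direction.
  camera.x = avatar.x - backOffset * Math.cos(avatar.pitchRad) * Math.cos(avatar.yawRad);
  camera.y = avatar.y - backOffset * Math.cos(avatar.pitchRad) * Math.sin(avatar.yawRad);
  camera.z = avatar.z - backOffset * Math.sin(avatar.pitchRad);
  // Lens faces the same direction as the avatar's line of sight.
  camera.yawRad = avatar.yawRad;
  camera.pitchRad = avatar.pitchRad;
  // Field of view mirrors the avatar's, so the two views stay synchronized.
  camera.fovDeg = avatar.fovDeg;
}

const avatar: Avatar = { x: 0, y: 0, z: 1.7, yawRad: 0, pitchRad: 0, fovDeg: 124 };
const camera: Camera = { x: 0, y: 0, z: 0, yawRad: 0, pitchRad: 0, fovDeg: 0 };
syncCameraToAvatar(avatar, camera); // called once per frame after input handling
```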
For example, when the first user virtual character of the exemplary embodiment of the present disclosure is active in the virtual education scene, the user can use the character control to adjust the character's route through the scene.
FIG. 6A is a schematic diagram illustrating a first position of a first user avatar in a second virtual educational scenario in accordance with an exemplary embodiment of the present disclosure. As shown in fig. 6A, suppose the virtual education scene 600 is a desert containing scene props such as a camel 601 and cactus. When the student user controls the first user avatar 602 to collect material on the camel 601, the character manipulation control 604 may be used to move the first user avatar 602 to the best material collection point. FIG. 6B is a schematic diagram illustrating a second position of the first user avatar in the second virtual educational scenario in accordance with an exemplary embodiment of the present disclosure. The student user moves the first user virtual character 602 to the best material collection point through the character manipulation control 604; when both the first user virtual character 602 and the camel 601 are stationary, material is collected directly through the image capture device 603, and when the camel 601 is moving, the student user uses the character manipulation control 604 to make the first user virtual character 602 follow the camel 601 and collect material.
Step 202: in response to a start operation for the virtual capture device, control the virtual capture device to collect materials in the virtual education scene based on the position of the virtual capture device.
When the virtual capture device of the exemplary embodiment of the present disclosure is an image capture device, it may capture picture material as well as video material, that is, dynamic image material. The collection area of the image capture device may be determined at least by the lens field angle of the image capture device. When the virtual capture device of the exemplary embodiment of the present disclosure is an audio capture device, the audio capture device can serve as a virtual capture device on its own, or its function can be integrated into an image capture device so that the image capture device collects audio material while collecting dynamic image material.
For example, as shown in fig. 3, when a student user needs to share materials, the terminal may display the material library in response to the student user operating the material library control 302, and the student user selects target material from the library for the sharing operation. It should be understood that the material a student user can collect with the image capture device 309 in the virtual educational scene 300 is actually material that the terminal downloads from the cloud server based on the real-time position of the first user avatar 307. The virtual capture device control 301 is used to collect material around the first user avatar 307 and save it in the material library.
In practical applications, when a student user has determined a collection position and starts collecting, the student user controls the virtual capture device to collect the material; meanwhile the terminal receives the operation and uploads to the server the real-time position of the first user virtual character and the specific parameters with which the virtual capture device is collecting at that moment. The server searches for the material in cloud storage according to preset parameters and sends what it finds to the terminal, thereby producing the material. The preset parameter may be the search range of the material data; from a visual point of view, the search range corresponds to the collection radius of the virtual capture device.
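The server-side step described above reduces to a radius query: given the avatar's real-time position and the collection radius (the visual reading of the preset search range), return the resources inside it. A minimal sketch, with invented resource and request shapes:

```typescript
interface SceneResource { id: string; x: number; y: number; assetUrl: string }

interface CollectRequest { avatarX: number; avatarY: number; collectRadius: number }

// Return the cloud-stored resources whose anchors fall within the collection radius.
function searchMaterials(cloud: SceneResource[], req: CollectRequest): SceneResource[] {
  return cloud.filter(r =>
    Math.hypot(r.x - req.avatarX, r.y - req.avatarY) <= req.collectRadius
  );
}

const cloudStorage: SceneResource[] = [
  { id: "cheetah-306", x: 12, y: 3, assetUrl: "cdn/cheetah.mp4" },
  { id: "mountain", x: 420, y: 90, assetUrl: "cdn/mountain.jpg" },
];

// Only the cheetah is inside a 50-unit collection radius around the avatar.
console.log(searchMaterials(cloudStorage, { avatarX: 10, avatarY: 0, collectRadius: 50 }));
```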
For example, FIG. 7A shows an initial interface schematic diagram of an image capture device according to an exemplary embodiment of the present disclosure. When the first sub-control 3012 shown in fig. 3 is clicked, the initial interface 700 shown in fig. 7A may be displayed. The initial interface 700 has a lens toggle button 701, a capture button 702, a photo tab 703, a video tab 704, a thumbnail 705 of the most recently taken picture, an image capture device field-of-view control 706, and a capture interface 707.
FIG. 7B shows a display interface diagram of an image capture device in a camera state according to an exemplary embodiment of the present disclosure. As shown in fig. 7B, when the student user clicks the photo tab 703 of the display interface 700, the image capture device is in the picture-collecting state and the capture button 702 serves as a camera button. When the capture button 702 is clicked, the image capture device collects picture material and displays a thumbnail 705 of the captured picture in the lower right corner of the display interface 700; the student user can move the capture interface 707 of the image capture device using the field-of-view control 706. It should be appreciated that to view the picture just taken, the user can click the thumbnail 705 to enter the material library.
FIG. 7C shows a display interface schematic diagram of an image capture device in a video recording state according to an exemplary embodiment of the present disclosure. As shown in fig. 7C, when the student user clicks the video tab 704 of the display interface 700, the image capture device is in the video recording state and the capture button 702 becomes a record button. When no recording is in progress, the record button is in the start-recording state, and a recording progress 709 is displayed on the display interface 700 as the elapsed recording time. When a student user wants to collect video material, the student user clicks the record button; the button switches from the start-recording state to the stop-recording state, and the recording progress 709 starts timing. To finish, the student user clicks the capture button 702 again to stop recording; the recording progress 709 stops, so the total recording duration can be read from where the progress stopped.
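The record button's behavior is a two-state toggle with a progress timer. The sketch below captures it; the class and method names are assumptions, and times are plain millisecond numbers rather than a real clock.

```typescript
class VideoRecorder {
  private recording = false;
  private startedAtMs = 0;
  private totalMs = 0;

  // Called when the capture button 702 is clicked in the video tab.
  toggle(nowMs: number): void {
    if (!this.recording) {
      this.recording = true;
      this.startedAtMs = nowMs; // recording progress 709 starts timing
    } else {
      this.recording = false;
      this.totalMs = nowMs - this.startedAtMs; // progress stops here
    }
  }

  // Elapsed time shown as the recording progress while recording,
  // or the total duration once recording has stopped.
  progressMs(nowMs: number): number {
    return this.recording ? nowMs - this.startedAtMs : this.totalMs;
  }
}

const rec = new VideoRecorder();
rec.toggle(0);                       // start recording
console.log(rec.progressMs(4_000));  // 4000 ms elapsed so far
rec.toggle(10_000);                  // stop recording
console.log(rec.progressMs(12_000)); // total duration: 10000 ms
```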
The materials collected in the virtual education scene can be uploaded directly to the terminal and stored in the local album; the material library in the virtual education scene stays synchronized with the terminal's local album, and when material needs processing it can be handled directly by the terminal's processing software. For example, when image sharpness needs to be adjusted, the material is retrieved from the local album, opened and processed in image processing software, and saved back to the local album; the server automatically synchronizes it into the material library, which makes further processing convenient for the student user.
In practical applications, when a student user has determined the image to collect, the student user controls the image capture device to collect the material; meanwhile the terminal receives the operation and uploads to the server the real-time position of the first user virtual character and the specific parameters with which the image capture device is collecting at that moment. The server searches for image material in cloud storage according to preset parameters and sends the result to the terminal, thereby producing the image material. The preset parameter may be the search range of the material data; from a visual point of view, the search range corresponds to the collection radius of the image capture device.
FIG. 7D illustrates a diagram of invoking local camera permission according to an exemplary embodiment of the present disclosure. As shown in fig. 7D, when the student user clicks the lens toggle button 701, a popup 7010 appears on the display interface 700 asking whether the student user wants to invoke local permission. If the student user selects the confirm button 70101, the terminal's own image capture device is activated to collect material of the real scene; if the student user selects the cancel button 70102, material continues to be collected in the virtual educational scene through the capture interface 707.
FIG. 7E shows a presentation interface schematic of the image material library in an exemplary embodiment of the present disclosure. As shown in fig. 7E, when the student user views the collected material, the display interface 700 can be made to show the material library presentation interface 7011 of fig. 7E in two ways. The presentation interface 7011 includes an image material entry 70111 and an audio material entry 70112; to delete material, the student user clicks the delete button 70113 in the upper right corner.
In the first way, as shown in fig. 3, the material library stores the material collected by the image capture device 309, and the student user can click the material library control 302 directly to pop up the presentation interface 7011 of the material library shown in fig. 7E, which includes the image material entry 70111, the audio material entry 70112, and the delete button 70113.
In the second way, as shown in fig. 7E, the thumbnail 705 on the capture interface can be clicked when the collected material needs to be viewed; the thumbnail 705 serves as an entry to the material library, jumping to its presentation interface 7011, from which the student user can view the collected material, including pictures, videos, animated pictures, and so on. It should be understood that when the material is a video, the video duration may be displayed on the thumbnail of the video file in the presentation interface 7011. Material collected by student users in the virtual education scene can also be stored directly in the local album. In the presentation interface 7011, clicking a photo or video gives a full-size preview, and material can be deleted by clicking the delete button 70113 in the upper right corner.
FIG. 8A illustrates an audio recording interface schematic diagram during audio recording in an exemplary embodiment of the present disclosure. As shown in fig. 8A, the audio recording interface 800 contains a preset name 801 for the audio file, a recording progress 802, a record button 803, and a close button 804. While the student user records audio, the record button 803 is in the recording-on state and the recording progress 802 displays the elapsed recording time.
FIG. 8B shows an audio recording interface schematic diagram when audio recording ends in an exemplary embodiment of the present disclosure. As shown in fig. 8B, the record button 803 can be clicked on the audio recording interface 800 to switch it from the recording-on state to the recording-off state. The recording progress 802 then displays the final duration of the recorded audio; after recording ends, the audio recording interface can be closed with the close button 804, and the recorded audio is automatically stored in the audio library within the material library, or in a local folder.
FIG. 8C shows a schematic diagram of the audio material presentation interface in an exemplary embodiment of the present disclosure. When a student user needs to retrieve collected audio material, as shown in fig. 3, the student user can click the material library control 302 to display the interface shown in fig. 8C. On the audio material presentation interface 805 shown in fig. 8C, the student user can search for audio material, click an item to play it, and delete it with the delete button 8051 in the upper right corner.
The material of the exemplary embodiment of the present disclosure is a material segment located within the collection area of the virtual capture device in the virtual education scene. The material quality of a virtual object in the scene segment is determined by the distance between the virtual object and the virtual capture device (which moves with the first user virtual character), and the material quality is negatively correlated with that distance.
Illustratively, when the virtual capture device is an image capture device, the closer a scene element or scene prop in the virtual education scene is to the real-time position of the first user virtual character, the better its display effect in the collected material image; the farther away it is, the worse the display effect.
For example, material quality includes image sharpness, and the best sharpness can be achieved by adjusting the parameters of the image capture device. As shown in fig. 3, when a student user captures images of the cheetah 306, the captured material picture contains the cheetah 306, mountains, houses, trees, and the like; in the picture, the cheetah 306, being closer to the virtual capture device 309, is sharper than the distant mountains. When the student user keeps the first user virtual character 307 at the same position and captures the distant mountains instead, a sharper mountain image can be obtained by adjusting the parameters of the image capture device 309.
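The negative correlation between distance and quality, and the recovery of sharpness by re-parameterizing the device, can be illustrated with a toy sharpness model. The falloff formula and constants below are illustrative assumptions; the disclosure states only that quality decreases with distance and that parameters can be adjusted.

```typescript
interface CaptureParams { focusDistance: number; falloff: number }

// Sharpness in [0, 1]: 1 at the focus distance, decreasing as the subject's
// distance deviates from it.
function sharpness(subjectDistance: number, p: CaptureParams): number {
  return 1 / (1 + p.falloff * Math.abs(subjectDistance - p.focusDistance));
}

const params: CaptureParams = { focusDistance: 10, falloff: 0.05 };
console.log(sharpness(12, params));  // nearby cheetah: close to 1
console.log(sharpness(400, params)); // distant mountain: much lower
// Refocusing on the mountain restores its sharpness from the same position:
console.log(sharpness(400, { focusDistance: 400, falloff: 0.05 }));
```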
When the virtual capture device is an audio capture device, the collection area is determined by the preset range of the virtual capture device, and the collectable range of each audio-playing prop in the scene is preset as a parameter. Based on the audio capture radius and the position of the audio capture device in the virtual education scene, the server searches cloud storage for the scene props and scenery resources within that radius that can emit sound, and then sends the search result to the terminal as an audio file. If the material quality needs to be processed, the server can, before sending the search result, attenuate the sound emitted by the scene props and scenery within the audio capture radius according to their distance from the audio capture device.
When audio is being recorded, the server controls a scene prop to play the audio associated with it once the distance between the virtual capture device and the scene prop is determined to be less than or equal to the preset distance. For example, the collectable sound range of the cheetah 306 shown in fig. 3 can be a circle of preset radius centered on the cheetah's real-time position. The audio of the cheetah 306 is real-scene sound recorded in advance by the terminal, and the server associates the cheetah 306 in the virtual education scene 300 with this pre-recorded sound. When the audio capture device is within the preset range of the cheetah's real-time position, the terminal automatically plays the audio associated with the cheetah 306, with the audio attenuation values preset in advance by the server.
For example, the material quality includes sound intensity, and a collection range is preset for each scene prop capable of emitting audio. When the real-time coordinates of the first user virtual character are far from the real-time coordinates of the scene prop but still within the collection range, the audio intensity collected by the audio capture device is low; when the real-time coordinates of the first user virtual character are close to those of the scene prop, the collected audio intensity is high.
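The server-side radius search and distance attenuation described above might look like the following minimal sketch. The prop dictionary layout and the linear attenuation law are illustrative assumptions; the disclosure only requires intensity to fall as distance grows within the collection range.

```python
import math


def audible_props(props, mic_pos, radius):
    """Return (audio_id, gain) pairs for props inside the audio capture radius,
    with the gain attenuated toward 0 at the edge of the radius."""
    result = []
    for prop in props:
        d = math.dist(prop["pos"], mic_pos)
        if d <= radius:
            gain = max(0.0, 1.0 - d / radius)  # closer prop -> louder audio
            result.append((prop["audio_id"], gain))
    return result


props = [
    {"audio_id": "cheetah_roar", "pos": (2.0, 0.0, 1.0)},
    {"audio_id": "river", "pos": (40.0, 0.0, 9.0)},
]
print(audible_props(props, mic_pos=(0.0, 0.0, 0.0), radius=10.0))
```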
As a possible implementation, the virtual capture device is associated with the position of the corresponding first user virtual character: the position of the capture device follows the position of the first user virtual character, but its pose is not associated and can be aimed independently.
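A minimal sketch of this position-only association, run once per frame, could look as follows; the class and field names are assumptions for illustration.

```python
class CaptureDevice:
    """Position follows the avatar; pose (yaw) is independent."""

    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.yaw = 0.0  # pose is NOT bound to the avatar

    def follow(self, avatar_position):
        # the device position is associated with the first user virtual
        # character's position ...
        self.position = avatar_position
        # ... while self.yaw is left untouched and can be aimed freely


device = CaptureDevice()
device.follow((12.0, 0.0, 7.0))  # device moves with the avatar
device.yaw = 90.0                # pose adjusted independently of the avatar
```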
For example, the terminal may associate the virtual capture device with the location of the virtual object in response to a trigger operation for the virtual object. The virtual object can be a scene prop or a target virtual character.
When the virtual object is a scene prop, the virtual object is a dynamic scene prop or a static scene prop. A dynamic scene prop means that when the virtual capture device is associated with it, the state of the virtual capture device changes along with the prop; a static scene prop means that when the virtual capture device is associated with it, the state of the virtual capture device does not change.
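The dynamic/static distinction can be sketched as a per-frame update after association; the `dynamic` flag and dictionary layout below are illustrative assumptions.

```python
def update_capture_device(device, prop):
    """Run once per frame after the capture device is associated with a prop.
    A dynamic prop (pan/tilt head, vehicle, cheetah) drags the device along;
    a static prop (stone, house) leaves the device's state unchanged."""
    if prop["dynamic"]:
        device["pos"] = prop["pos"]  # device state changes with the prop
    # for a static prop, nothing to do: the device state does not change


device = {"pos": (0.0, 0.0, 0.0)}
vehicle = {"dynamic": True, "pos": (5.0, 0.0, 2.0)}
stone = {"dynamic": False, "pos": (9.0, 0.0, 9.0)}
update_capture_device(device, vehicle)  # device follows the moving vehicle
update_capture_device(device, stone)    # device stays where it is
```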
Fig. 9A shows a schematic diagram of a virtual education scene containing dynamic scene props in an exemplary embodiment of the present disclosure. As shown in fig. 9A, in the virtual education scene 900, the dynamic scene props may include a pan/tilt head 901, a virtual vehicle, and a cheetah 903, and the static scene props may include a stone 902, houses, lawns, and the like.
As shown in fig. 9A, the virtual education scene 900 contains a pan/tilt head 901. When the distance between the first user virtual character 907 and the pan/tilt head 901 is less than or equal to the preset distance, a pop-up window 908 appears in the virtual education scene 900, asking the student user whether to enable the pan/tilt head 901 for material collection. When the student user clicks the confirm button 9081, the pan/tilt head 901 is enabled: the virtual capture device 904 appears on the pan/tilt head 901, and the student user then controls the field of view of the virtual capture device 904 on the pan/tilt head 901 through the character control 906 to collect material. It should be understood that the picture or video collected by the virtual capture device 904 at this time includes the first user virtual character 907, and the material area covered by the pan/tilt head 901 serves as an optimal material collection area preset in advance by the server. When the student user clicks the cancel button 9082, collecting material via the pan/tilt head 901 is cancelled.
Fig. 9B shows a schematic diagram of a virtual education scene containing static scene props in an exemplary embodiment of the present disclosure. As shown in fig. 9B, when the distance between the first user virtual character 907 and the stone 902 is less than or equal to the preset distance, similarly to the description of fig. 9A, the display interface of the terminal shows the pop-up window 908, which asks the user whether to enable the stone 902 for material collection. When the student user confirms, the virtual capture device 904 appears on the stone 902, and the student user controls the virtual capture device 904 on the stone 902 through the corresponding control to collect material.
As a possible implementation, there are many target virtual characters in the virtual education scene, and a target virtual character is not the same virtual character as the first user virtual character. For example, the target virtual character includes a non-user virtual character and a second user virtual character.
If the triggering operation for the virtual object is an invitation message issued for a target virtual character, then before associating the virtual capture device with the position of the target object, the method of the exemplary embodiment of the present disclosure may further include the following steps (a minimal sketch of this flow is given after step 804):
Step 801: in response to the invitation message issued for the target virtual character, obtain the reply message sent by the target virtual character; if the reply message is an acceptance message, execute step 802, and if the reply message is a rejection message, execute step 803.
Step 802: confirm that the target virtual character has issued an acceptance message for the invitation message. At this point the server binds the relationship between the two characters, and the second user can control the virtual capture device to collect material in the manner described above, collecting according to the invitation message issued by the first user virtual character.
Step 803: confirm that the second user virtual character has sent a rejection message for the invitation message.
Step 804: in response to a triggering operation for a scene prop, associate the virtual capture device with the position of that scene prop.
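A minimal sketch of this reply-handling flow follows, under the assumption of a simple server stub; the method names bind and associate_device_with_prop are illustrative, not the disclosure's API.

```python
class ServerStub:
    """Stand-in for the server that binds characters and associates props."""

    def bind(self, first_avatar, target_avatar):
        print(f"bound {first_avatar} <-> {target_avatar}")  # step 802

    def associate_device_with_prop(self, first_avatar, prop):
        print(f"capture device of {first_avatar} attached to {prop}")  # step 804


def handle_reply(server, first_avatar, target_avatar, reply, fallback_prop):
    # step 801: a reply message has been obtained for the invitation
    if reply == "accept":
        # step 802: bind the two characters; the invited user may now collect
        # material with the virtual capture device per the invitation
        server.bind(first_avatar, target_avatar)
    else:
        # step 803 confirmed a rejection -> step 804: fall back to a scene prop
        server.associate_device_with_prop(first_avatar, fallback_prop)


handle_reply(ServerStub(), "first_user", "second_user", "reject", "pan_tilt_head_901")
```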
When the target virtual character is a second user virtual character, in response to an invitation message issued for one or more second user virtual characters, the terminal uploads the invitation message to the server; the server displays the invitation message on the task interface corresponding to the one or more second user virtual characters and prompts each second user to accept or decline the invitation.
FIG. 10A illustrates a schematic diagram of a second user task interface of an exemplary embodiment of the present disclosure. Fig. 10B illustrates the display interface of the first user when the second user accepts the task according to the exemplary embodiment of the disclosure. As shown in fig. 10A, in response to the student user operating the message control 1001, a message pop-up window 1002 pops up in the display interface 1000. The message pop-up window 1002 may include a chat button 10021 and a task button 10022: students can enter the chat interface through the chat button 10021 and speak freely there, and can enter the task interface through the task button 10022, where an invitation message interface 10023 appears and the second user is free to select a task and accept it. When the second user clicks the accept button 10024 of the message pop-up window 1002, as shown in fig. 10B, the server displays the second user's acceptance pop-up window 10026 on the display interface 1000 of the first user and binds the relationship between the first user virtual character 1003 and the second user virtual character 1004.
When the student user controls the first user virtual character during material collection and the material to be collected is far from the first user virtual character, a task can be issued through the public dialog box to invite other student users in the virtual education scene to help collect the material. When the first user virtual character issues a task for a target object in the public dialog box, other student users in the virtual education scene can view the task; a user who chooses to accept it clicks the task and then clicks the accept button, thereby helping the first user virtual character collect the material. The collected material is uploaded through the task window directly into the backpack of the first user virtual character, completing the task. Having the second user virtual character help the first user virtual character collect material improves collection efficiency, strengthens group cooperation among student users, and makes the classroom atmosphere more lively.
Fig. 10C illustrates the display interface when the second user declines the task according to the exemplary embodiment of the disclosure. As shown in fig. 10C, when the second user declines the task issued by the first user, the server displays the second user's rejection pop-up window 10027 on the display interface 1000 of the first user. After receiving the rejection message, the first user moves the first user virtual character 1003 to the material collection area through the character control button 1005 to collect the material.
When the target virtual character is a non-user virtual character, fig. 10D illustrates a schematic diagram of a non-user virtual character inquiry interface in a virtual education scene according to an exemplary embodiment of the present disclosure. As shown in fig. 10D, in the display interface 1000, the first user virtual character 1003 can move near the non-user virtual character 1006 through the character control button 1005, whereupon a pop-up window 10028 appears. The first user virtual character 1003 sends an inquiry to the non-user virtual character 1006, and the student user then clicks the send button 10029; at this point the virtual capture device 1007 is handed over from the first user virtual character 1003 to the non-user virtual character 1006, meaning that the non-user virtual character 1006 has accepted the invitation to help take pictures of the first user virtual character 1003. If the student user chooses to give up, the student user controls the first user virtual character 1003 to move away from the non-user virtual character 1006, which is treated as automatically giving up the help. It should be understood that the inquiry to the non-user virtual character 1006 may take the form of a text selection box or a voice prompt, and the image information obtained in this way includes the first user virtual character.
In one implementation, the material may be shared and edited after the student user has collected it. After the materials are collected, the teacher user opens the sharing interface entrance; the student user clicks the entrance to enter the sharing interface and uploads the collected materials there for sharing and discussion.
Fig. 11A illustrates a first sharing interface according to an exemplary embodiment of the present disclosure. As shown in fig. 11A, the sharing interface 1100 includes a material library 1102 and a sharing window 1101, and a student user can drag the scroll bar 1108 in the bar interface 1107 to browse the materials. In the sharing stage, the student user drags materials from the material library 1102 into the sharing window 1101 and clicks the submit button 1105 to upload them; to cancel the upload, the student user clicks the cancel button 1104, and materials in the sharing window 1101 can be deleted through the delete button 1106.
Fig. 11B illustrates a second sharing interface according to an exemplary embodiment of the present disclosure. As shown in fig. 11B, on the sharing interface 1100, when the student user needs to delete material in the sharing window 1101, the student user selects the material to be deleted on the terminal display; the sharing interface 1100 then pops up a trash can 1109, and the material can be dragged onto the trash can to delete it.
Fig. 11C illustrates a third sharing interface of an exemplary embodiment of the present disclosure. As shown in fig. 11C, when a student user needs to add material, clicking the add button 1110 on the sharing interface 1100 jumps directly to the material library, where the student user can click the material to be uploaded to add it to the sharing window 1101; other operations are as described with reference to fig. 11A.
In one possible implementation, the present disclosure supports composing a song from motion pictures and audio according to a given music score. In this process, a student user selects a track in the music library and then selects the materials needed to compose it, including motion pictures and audio. It should be understood that for a motion picture, the system automatically extracts the audio it contains. After the student user selects the materials, the server can automatically generate the song: because each piece of music in the music library carries a tone rule, the server can automatically adjust the pitch of the selected audio materials and splice them to form the song.
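A heavily simplified sketch of this pitch-adjust-and-splice step follows. Mono numpy arrays, naive resampling, and the (material_index, semitone_offset) score format are all assumptions for illustration, not the server's actual tone-rule algorithm.

```python
import numpy as np


def pitch_shift(samples, semitones):
    """Shift pitch by resampling (this also changes duration; acceptable
    for a sketch)."""
    ratio = 2.0 ** (semitones / 12.0)
    positions = np.arange(0.0, len(samples), ratio)
    return np.interp(positions, np.arange(len(samples)), samples)


def compose(track_score, materials):
    """track_score: list of (material_index, semitone_offset) pairs derived
    from the chosen library track; materials: audio clips, including audio
    automatically extracted from motion pictures."""
    return np.concatenate([pitch_shift(materials[i], s) for i, s in track_score])


clip = np.sin(np.linspace(0.0, 2.0 * np.pi * 440.0, 44100))  # 1 s test tone
song = compose([(0, 0), (0, 4), (0, 7)], [clip])             # a simple triad line
```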
In one or more aspects provided by the exemplary embodiments of the present disclosure, a virtual capture device is displayed in a virtual education scene, the virtual capture device being associated with the position of the corresponding first user virtual character. On this basis, in response to a start operation for the virtual capture device, the virtual capture device is controlled, based on its position, to collect materials within the virtual education scene. The method of the exemplary embodiment of the disclosure can therefore use the virtual capture device's material-collecting capability to issue collection tasks to students, so that a student user controls a virtual character to collect materials and shares the collected materials through human-computer interaction. This strengthens the sense of immersion and the interest of material collection, lets student users genuinely participate in the material collection stage, raises their enthusiasm and engagement, improves their course experience, and thereby improves the learning effect.
Moreover, in the method according to the exemplary embodiment of the present disclosure, since the subject props indicate the content to be collected, the virtual capture devices used by the virtual character may be the same or different for different subject materials; when the virtual capture devices differ, the collected materials differ as well. The virtual capture devices selected by students are therefore reusable throughout the material collection process and can serve as plug-ins compatible with different virtual education scenes and different types of material collection, reducing the hardware configuration requirement, so that the method of the exemplary embodiment of the disclosure can be adapted to common device models.
The above description introduces the scheme provided by the embodiments of the present disclosure mainly from the perspective of the terminal. It is understood that, to implement the above functions, the terminal includes corresponding hardware structures and/or software modules for performing each function. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The embodiments of the present disclosure may divide the terminal into functional units according to the above method example; for example, each functional module may be divided corresponding to a function, or two or more functions may be integrated into one processing module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present disclosure is illustrative and is only one way of dividing logic functions; other divisions are possible in actual implementation.
In the case of dividing functional modules according to their functions, exemplary embodiments of the present disclosure provide a material collection apparatus, which may be a terminal or a chip applied to a terminal. Fig. 12 shows a schematic block diagram of the material collection apparatus of the exemplary embodiment of the present disclosure. As shown in fig. 12, the material collection apparatus 1200 is used for a virtual education scene having at least a first user virtual character and at least one scene prop, the apparatus 1200 including:
a display module 1201, configured to display a virtual capture device in the virtual education scene, where the virtual capture device is associated with the position of the corresponding first user virtual character;
a control module 1202, configured to control the virtual collecting device to collect materials in the virtual education scene based on the position of the virtual collecting device.
As a possible implementation, the control module 1202 is further configured to, before the virtual capture device collects materials in the virtual education scene based on its position, associate the virtual capture device with the position of a virtual object in response to a triggering operation for that virtual object.
As a possible implementation manner, when the virtual object is the scene prop, the virtual object is a dynamic scene prop or a static scene prop.
As a possible implementation, the triggering operation for the virtual object is an invitation message issued for a target virtual character. The apparatus further includes a confirmation module 1203; before the control module 1202 associates the virtual capture device with the position of the target object, the confirmation module 1203 is configured to respond to the invitation message issued for the target virtual character and confirm that the target virtual character has issued an acceptance message for the invitation message.
As a possible implementation, the target virtual character includes a non-user virtual character or a second user virtual character. When the target virtual character is the second user virtual character, the control module 1202 is further configured to, before associating the virtual capture device with the position of the target object, and under the condition that the second user virtual character is confirmed to have sent a rejection message for the invitation message, respond to a triggering operation for a scene prop and associate the virtual capture device with the position of that scene prop.
As a possible implementation manner, the virtual collecting device is associated with the posture of the corresponding first user virtual character, and the virtual collecting device and the first user virtual character are in a relative static state as the posture of the first user virtual character changes.
As a possible implementation manner, the material is a material segment located in a collection area of the virtual acquisition device within the virtual education scene, a material quality of the virtual object within the scene segment is determined by a distance between the virtual object within the scene segment and the virtual acquisition device, and the material quality is inversely related to the distance.
As a possible implementation, when the virtual capture device is an image capture device, the collection area is determined at least by the lens field angle of the image capture device, and the material quality is image sharpness;
when the virtual acquisition equipment is audio acquisition equipment, the collection area is determined by the preset range of the virtual acquisition equipment, and the quality of the material is sound intensity.
As a possible implementation, the display module 1201 is configured to, after the virtual capture device is displayed in the virtual education scene, control the scene prop to play the audio associated with it when it is determined that the distance between the virtual capture device and the scene prop is less than or equal to a preset distance.
As a possible implementation, the apparatus further includes a sending module 1204. After the virtual capture device is displayed in the virtual education scene, the sending module 1204 is configured to send a configuration change instruction to the server in response to a configuration change operation for the virtual capture device, where the configuration change instruction instructs the server to change the configuration information of the virtual capture device based on the scene segment in the collection area of the virtual capture device and the capture parameters of the virtual capture device.
As a possible implementation, the virtual capture device is an image capture device. The configuration change instruction includes a first change instruction for instructing the server to change the category information of the image capture device, and a second change instruction for instructing the server to change the parameters of the image capture device, where the image capture device parameters include at least one of focal length, aperture, and sensitivity.
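On the client side, the configuration change instruction might be serialized as in the following sketch; the JSON field names and socket transport are illustrative assumptions, not the disclosure's wire format.

```python
import json
import socket
from typing import Optional


def send_config_change(sock: socket.socket, device_id: str,
                       category: Optional[str] = None,
                       params: Optional[dict] = None) -> None:
    """Serialize and send a configuration change instruction to the server."""
    instruction = {"device_id": device_id}
    if category is not None:
        # first change instruction: change the image capture device category
        instruction["change_category"] = category
    if params is not None:
        # second change instruction: change capture parameters, e.g.
        # {"focal_length": 85, "aperture": 2.8, "sensitivity": 400}
        instruction["change_params"] = params
    sock.sendall(json.dumps(instruction).encode("utf-8"))
```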
Fig. 13 shows a schematic block diagram of a chip of an exemplary embodiment of the present disclosure. As shown in fig. 13, the chip 1300 includes one or more (including two) processors 1301 and a communication interface 1302. The communication interface 1302 may support a server to perform the data transceiving steps in the above method, and the processor 1301 may support the server to perform the data processing steps in the above method.
Optionally, as shown in fig. 13, the chip 1300 further includes a memory 1303, which may include read-only memory and random access memory and provides the processor with operation instructions and data. A portion of the memory may also include non-volatile random access memory (NVRAM).
In some embodiments, as shown in fig. 13, the processor 1301 performs the corresponding operations by calling operation instructions stored in the memory (the operation instructions may be stored in an operating system). The processor 1301 controls the processing operations of any of the terminal devices and may also be referred to as a Central Processing Unit (CPU). The memory 1303 may include read-only memory and random access memory, and provides instructions and data to the processor 1301. A portion of the memory 1303 may also include NVRAM. In application, the processor, communication interface, and memory are coupled together by a bus system that may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For clarity of illustration, however, the various buses are labeled in fig. 13 as the bus system 1304.
The method disclosed by the embodiment of the disclosure can be applied to a processor or implemented by the processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an ASIC, an FPGA (field-programmable gate array) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present disclosure may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
An exemplary embodiment of the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor, the computer program, when executed by the at least one processor, is operative to cause the electronic device to perform a method according to embodiments of the disclosure.
The disclosed exemplary embodiments also provide a non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is adapted to cause the computer to perform a method according to an embodiment of the present disclosure.
The exemplary embodiments of the present disclosure also provide a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is adapted to cause the computer to perform a method according to an embodiment of the present disclosure.
Fig. 14 shows a block diagram of an exemplary electronic device that can be used to implement an embodiment of the disclosure. Referring to fig. 14, the electronic device 1400 may be a server or a client of the disclosure and is an example of a hardware device that can be applied to aspects of the disclosure. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 14, the electronic device 1400 includes a computing unit 1401 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1402 or a computer program loaded from a storage unit 1408 into a random access memory (RAM) 1403. The RAM 1403 can also store various programs and data required for the operation of the device 1400. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other via a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
A number of components in the electronic device 1400 are connected to the I/O interface 1405, including: an input unit 1406, an output unit 1407, a storage unit 1408, and a communication unit 1409. The input unit 1406 may be any type of device capable of inputting information to the electronic device 1400; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 1407 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 1408 may include, but is not limited to, a magnetic disk and an optical disk. The communication unit 1409 allows the electronic device 1400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as a Bluetooth(TM) device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
As shown in FIG. 14, computing unit 1401 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 1401 performs the respective methods and processes described above. For example, in some embodiments, the methods of the exemplary embodiments of the present disclosure may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1408. In some embodiments, part or all of the computer program can be loaded and/or installed onto the electronic device 1400 via the ROM 1402 and/or the communication unit 1409. In some embodiments, the computing unit 1401 may be configured to perform the method by any other suitable means (e.g. by means of firmware).
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present disclosure are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a terminal, a user device, or other programmable apparatus. The computer program or instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that integrates one or more available media. The usable medium may be a magnetic medium, such as a floppy disk, a hard disk, a magnetic tape; or optical media such as Digital Video Disks (DVDs); it may also be a semiconductor medium, such as a Solid State Drive (SSD).
While the disclosure has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the disclosure. Accordingly, the specification and figures are merely exemplary of the present disclosure as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present disclosure. It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (13)

1. A method of material collection for a virtual education scene comprising a first user virtual character and at least one scene prop, the method comprising:
displaying a virtual capture device in the virtual education scene, wherein the virtual capture device is associated with the position of the corresponding first user virtual character;
and controlling the virtual capture device to collect materials within the virtual education scene based on the position of the virtual capture device.
2. The method of claim 1, wherein prior to controlling the virtual capture device to collect material within the virtual educational scene based on the location of the virtual capture device, the method further comprises:
associating the virtual capture device with the location of the virtual object in response to a triggering operation for the virtual object.
3. The method of claim 2, wherein when the virtual object is one of the scene props, the virtual object is a dynamic scene prop or a static scene prop.
4. The method of claim 2, wherein the triggering operation for the virtual object is an invitation message issued for a target virtual character, wherein the target virtual character and the first user virtual character are not the same virtual character, and before associating the virtual capture device with the position of the target object, the method further comprises:
and responding to the invitation message sent out aiming at the target virtual role, and confirming that the target virtual role sends out an acceptance message aiming at the invitation message.
5. The method of claim 4, wherein the target virtual character comprises a non-user virtual character or a second user virtual character, and when the target virtual character is the second user virtual character, before associating the virtual capture device with the position of the target object, the method further comprises:
under the condition that the second user virtual character sends a rejection message for the invitation message, responding to a triggering operation for a scene prop, and associating the virtual capture device with the position of the scene prop.
6. The method of claim 1, wherein the virtual capture device is associated with a pose of the respective first user avatar, and wherein the virtual capture device is in a relatively stationary state with the first user avatar as the pose of the first user avatar changes.
7. The method of any one of claims 1-6, wherein the material is a segment of material within the virtual educational scene that is located within a collection area of the virtual capture device, wherein a material quality of the virtual object within the scene segment is determined by a distance between the virtual object within the scene segment and the virtual capture device, and wherein the material quality is inversely related to the distance.
8. The method of claim 7, wherein when the virtual capture device is an image capture device, the collection area is determined at least by a lens field angle of the image capture device, and the material quality is image sharpness;
when the virtual acquisition equipment is audio acquisition equipment, the collection area is determined by the preset range of the virtual acquisition equipment, and the quality of the material is sound intensity.
9. The method of any of claims 1-6, wherein after the virtual educational scene displays a virtual collection device, the method further comprises:
and controlling the scene prop to play the audio associated with the scene prop under the condition that the distance between the virtual acquisition equipment and the scene prop is determined to be smaller than or equal to the preset distance.
10. The method of any of claims 1-6, wherein after the virtual educational scene displays a virtual collection device, the method further comprises:
in response to a configuration change operation for the virtual acquisition device, sending a configuration change instruction to a server, wherein the configuration change instruction is used for instructing the server to change the configuration information of the virtual acquisition device based on the scene segment in the collection area of the virtual acquisition device and the acquisition parameters of the virtual acquisition device.
11. The method of claim 10, wherein the virtual capture device is an image capture device; the configuration change instruction includes a first change instruction for instructing the server to change category information of the image capture device, and a second change instruction for instructing the server to change parameters of the image capture device, wherein the image capture device parameters include at least one of a focal length, an aperture, and a sensitivity.
12. An apparatus for collecting material, for use in a virtual education scene comprising a first user virtual character and at least one scene prop, the apparatus comprising:
a display module, configured to display a virtual capture device in the virtual education scene based on configuration information of the virtual capture device;
a control module, configured to control the virtual capture device to collect materials within the virtual education scene based on the position of the virtual capture device, wherein the virtual capture device is associated with the position of the corresponding first user virtual character.
13. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method according to any one of claims 1 to 11.
CN202210683066.XA 2022-06-15 2022-06-15 Material collection method and device and electronic equipment Active CN115050228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210683066.XA CN115050228B (en) 2022-06-15 2022-06-15 Material collection method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210683066.XA CN115050228B (en) 2022-06-15 2022-06-15 Material collection method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN115050228A true CN115050228A (en) 2022-09-13
CN115050228B CN115050228B (en) 2023-09-22

Family

ID=83162152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210683066.XA Active CN115050228B (en) 2022-06-15 2022-06-15 Material collection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115050228B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI800473B (en) * 2022-12-13 2023-04-21 黑洞創造有限公司 Metaverse Object Recording and Frame Re-recording System and Metaverse Object Recording and Frame Re-recording Method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109952757A (en) * 2017-08-24 2019-06-28 腾讯科技(深圳)有限公司 Method, terminal device and storage medium based on virtual reality applications recorded video
CN112291543A (en) * 2020-10-28 2021-01-29 杭州如雷科技有限公司 Projection method and system for immersive three-dimensional content
CN112462945A (en) * 2020-12-10 2021-03-09 广州工程技术职业学院 Virtual reality-based logistics port collecting operation teaching method, system and medium
CN112492097A (en) * 2020-11-26 2021-03-12 广州酷狗计算机科技有限公司 Audio playing method, device, terminal and computer readable storage medium
CN113467603A (en) * 2020-03-31 2021-10-01 北京字节跳动网络技术有限公司 Audio processing method and device, readable medium and electronic equipment
US20210337138A1 (en) * 2019-06-21 2021-10-28 Tencent Technology (Shenzhen) Company Limited Method and apparatus for controlling a plurality of virtual characters, device, and storage medium
CN113687720A (en) * 2021-08-23 2021-11-23 大连东软信息学院 Multi-person online virtual reality education system and use method thereof
CN114404973A (en) * 2021-12-28 2022-04-29 网易(杭州)网络有限公司 Audio playing method and device and electronic equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109952757A (en) * 2017-08-24 2019-06-28 腾讯科技(深圳)有限公司 Method, terminal device and storage medium based on virtual reality applications recorded video
US20210337138A1 (en) * 2019-06-21 2021-10-28 Tencent Technology (Shenzhen) Company Limited Method and apparatus for controlling a plurality of virtual characters, device, and storage medium
CN113467603A (en) * 2020-03-31 2021-10-01 北京字节跳动网络技术有限公司 Audio processing method and device, readable medium and electronic equipment
CN112291543A (en) * 2020-10-28 2021-01-29 杭州如雷科技有限公司 Projection method and system for immersive three-dimensional content
CN112492097A (en) * 2020-11-26 2021-03-12 广州酷狗计算机科技有限公司 Audio playing method, device, terminal and computer readable storage medium
CN112462945A (en) * 2020-12-10 2021-03-09 广州工程技术职业学院 Virtual reality-based logistics port collecting operation teaching method, system and medium
CN113687720A (en) * 2021-08-23 2021-11-23 大连东软信息学院 Multi-person online virtual reality education system and use method thereof
CN114404973A (en) * 2021-12-28 2022-04-29 网易(杭州)网络有限公司 Audio playing method and device and electronic equipment

Also Published As

Publication number Publication date
CN115050228B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
US11172012B2 (en) Co-streaming within a live interactive video game streaming service
US20210120054A1 (en) Communication Sessions Between Computing Devices Using Dynamically Customizable Interaction Environments
US10165261B2 (en) Controls and interfaces for user interactions in virtual spaces
US20180324229A1 (en) Systems and methods for providing expert assistance from a remote expert to a user operating an augmented reality device
CN104170318B (en) Use the communication of interaction incarnation
CN104363476B (en) It is a kind of based on online live active methods of forming a team, relevant apparatus and system
US11100695B1 (en) Methods and systems for creating an immersive character interaction experience
AU2017371954A1 (en) A system and method for collaborative learning using virtual reality
CN110809175B (en) Video recommendation method and device
US11741949B2 (en) Real-time video conference chat filtering using machine learning models
DE112021001301T5 (en) DIALOGUE-BASED AI PLATFORM WITH RENDERED GRAPHIC OUTPUT
CN115175751A (en) Driving virtual influencers based on predicted game activity and audience characteristics
US20230171459A1 (en) Platform for video-based stream synchronization
WO2022223029A1 (en) Avatar interaction method, apparatus, and device
US20170262877A1 (en) Virtual communication platform
WO2020063394A1 (en) Voice message display method and apparatus in application program, computer device, and computer-readable storage medium
CN112423143A (en) Live broadcast message interaction method and device and storage medium
CN115050228B (en) Material collection method and device and electronic equipment
US20230368464A1 (en) Information processing system, information processing method, and information processing program
US11527046B2 (en) Real world beacons indicating virtual locations
US20230215090A1 (en) Method and system for displaying virtual space at various point-in-times
CN112492323B (en) Live broadcast mask generation method, readable storage medium and computer equipment
WO2024007290A1 (en) Video acquisition method, electronic device, storage medium, and program product
JP2019155103A (en) Game replay method and system
US11494996B2 (en) Dynamic interaction deployment within tangible mixed reality

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant