WO2022142818A1 - Working method of a 5G strong interactive remote delivery teaching system based on a holographic terminal - Google Patents

Working method of a 5G strong interactive remote delivery teaching system based on a holographic terminal

Info

Publication number
WO2022142818A1
Authority
WO
WIPO (PCT)
Prior art keywords
teaching
holographic
classroom
lecturer
rendering
Prior art date
Application number
PCT/CN2021/131153
Other languages
English (en)
French (fr)
Inventor
杨宗凯
钟正
吴砥
吴珂
Original Assignee
华中师范大学
Priority date
Filing date
Publication date
Application filed by 华中师范大学
Publication of WO2022142818A1

Classifications

    • G09B 5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/10: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations, all student stations being capable of presenting the same information simultaneously
    • G09B 5/14: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations, with provision for individual teacher-student communication
    • G03H 1/0005: Adaptation of holography to specific applications
    • G03H 1/2294: Addressing the hologram to an active spatial light modulator
    • G03H 2001/0061: Adaptation of holography to haptic applications, where the observer interacts with the holobject
    • G03H 2001/0088: Adaptation of holography to video-holography, i.e. integrating hologram acquisition, transmission and display
    • G03H 2226/05: Means for tracking the observer
    • G06F 3/013: Eye-tracking input arrangements
    • G06F 3/017: Gesture-based interaction, e.g. based on a set of recognized hand gestures
    • G06N 5/02: Knowledge representation; symbolic representation
    • G06N 5/022: Knowledge engineering; knowledge acquisition
    • G06N 20/20: Ensemble learning
    • G06Q 50/205: Education administration or guidance
    • G06V 20/20: Scene-specific elements in augmented reality scenes
    • G06V 40/166: Face detection, localisation or normalisation using acquisition arrangements
    • G06V 40/171: Facial local features and components; facial parts, occluding parts (e.g. glasses), geometrical relationships
    • H04L 67/1001: Protocols in which an application is distributed across network nodes, for accessing one among a plurality of replicated servers
    • H04L 67/12: Protocols specially adapted for proprietary or special-purpose networking environments

Definitions

  • the invention belongs to the field of information-technology applications in teaching, and more particularly relates to a working method of a 5G strong interactive remote delivery teaching system based on a holographic terminal.
  • the remote delivery teaching system can create a new type of situational teaching environment by using a 5G network environment and holographic display terminals, overcoming the loneliness and insufficient sense of presence of online teaching and improving the concentration and sense of participation of teachers and students. With the help of a spatial positioning and tracking mechanism, teachers in the main classroom can interact with teachers and students in remote classrooms through multi-modal natural interaction methods such as eye movement, line of sight, and speech.
  • the 5G strong interactive remote delivery teaching system based on holographic terminals is conducive to promoting education equity, narrowing the gap between urban and rural education, and improving the quality of education, and has broad application prospects.
  • the existing remote delivery teaching systems are mainly converted from online live teaching and have the following problems: (1) insufficient information-technology infrastructure: slow network speeds and freezes constrain online teaching, and it is difficult for under-resourced schools to carry out high-level online teaching activities; (2) the contextualized teaching environment is not realistic enough: limited by network bandwidth and rendering capability, existing virtual-reality technologies, resources, terminals and applications place a large rendering workload on the user terminal, making it difficult to build a flexible contextualized teaching environment for large-scale application; (3) insufficient interaction: teachers and students interact through online video, which lacks multi-modal, real-time and diversified methods and can hardly meet the innovation needs of information-based teaching models. These defects limit the application of delivery teaching in practical teaching scenarios.
  • the present invention provides a working method of a 5G strong interactive remote delivery teaching system based on holographic terminals, and provides a new holographic presentation method and interactive form for delivery classrooms.
  • the present invention provides a 5G strong interactive remote delivery teaching system based on holographic terminals, comprising a data acquisition module, a data transmission module, a 5G cloud rendering module, a natural interaction module, a holographic display module and a teaching service module;
  • the data acquisition module is used to collect the various teaching behavior data of the lecturer and students in the lecturer and listening classrooms, as well as teacher-student interactions;
  • the data transmission module is used to realize audio/video streaming and holographic image data transmission between the lecturer classroom, the 5G cloud rendering engine, and the holographic terminals in the listening classroom;
  • the 5G cloud rendering module is used to realize high-speed rendering of teaching video streams and holographic images for the classroom side, where "classroom side" refers to both the lecturer classroom and the listening classroom;
  • the natural interaction module realizes the interaction between the teacher, the holographic teaching resources and the teaching environment during teaching by perceiving the lecturer's various interactive behaviors;
  • the holographic display module provides a display platform for holographic teaching resources and natural interaction;
  • the teaching service module provides teaching resources, teaching behavior and process analysis, and teaching service management for various users.
  • the present invention also provides a working method of the above-mentioned 5G strong interactive remote delivery teaching system based on the holographic terminal, comprising the following steps:
  • Natural interaction: collect relevant information in the holographic environment of the lecturer classroom, then analyze and organize it into different categories of motion, emotion and behavior; preset spatial positioning points in the lecturer classroom, associate rich-media teaching resources with them, and register the resources against the classroom's spatial positioning points, so that teachers can actively identify and trigger them and realize the interaction between teachers and students, the environment and teaching resources.
  • Holographic display: create interactive and personalized virtual teaching scenes, and use the Unity engine and a holographic rendering development kit to output the virtual scenes as holographic resources; on the classroom side, build a holographic imaging environment that integrates the virtual and the real, and guide lecturers to notice and trigger spatial positioning points in the teaching environment to achieve multimodal interaction.
  • Teaching service: the application release of teaching resources includes the release, push and update of the spatial positioning points of teaching resources; by collecting statistics on the teaching situation of teachers and students before, during and after class, analyze the lecturer's teaching style and the students' degree of engagement in class to obtain evaluation data on teaching emotion, behavior, effect, etc.; and realize unified management of the entire teaching service module.
  • the teaching service module supports the release and push of teaching resources and the update of spatial positioning points; by analyzing the teaching situation of teachers and students before, during and after class, it analyzes the lecturer's teaching style and the students' class involvement to obtain evaluation data on teaching emotion, behavior, effect and so on.
  • with the maturity of 5G networks and holographic display technology, their application in classroom teaching is increasingly practical, and the present invention helps meet the needs of remote delivery teaching.
  • FIG. 1 is an architecture diagram of a 5G strong interactive remote delivery teaching system based on a holographic terminal in an example of the present invention.
  • Fig. 2 is the working flow chart of the data acquisition module in the example of the present invention.
  • Fig. 3 is the working flow chart of the data transmission module in the example of the present invention.
  • FIG. 4 is a flow chart of 5G positioning work in the example of the present invention.
  • FIG. 5 is a working flowchart of the 5G cloud rendering module in the example of the present invention.
  • FIG. 6 is a working flow chart of the sensing sub-module in the example of the present invention.
  • FIG. 7 is a working flow chart of the registration sub-module in the example of the present invention.
  • FIG. 8 is a flow chart of interactive command output in the example of the present invention.
  • FIG. 9 is a working flow chart of the holographic display module in the example of the present invention.
  • FIG. 10 is a schematic diagram of the interaction between the teacher and the holographic teaching resource in the example of the present invention.
  • FIG. 11 is a flow chart of teaching resource application release in the teaching service module in the example of the present invention.
  • FIG. 12 is a flow chart of teaching behavior and process analysis in the teaching service module in the example of the present invention.
  • FIG. 13 is a flow chart of teaching service management in the teaching service module in the example of the present invention.
  • this embodiment provides a 5G strong interactive remote delivery teaching system based on holographic terminals, including a data acquisition module, a data transmission module, a 5G cloud rendering module, a natural interaction module, a holographic display module and a teaching service module;
  • the data acquisition module is used to collect the various teaching behavior data of the lecturer and students in the lecturer and listening classrooms, as well as teacher-student interactions;
  • the data transmission module is used to realize audio/video streaming and holographic image data transmission between the lecturer classroom, the 5G cloud rendering engine, and the holographic terminals in the listening classroom;
  • the 5G cloud rendering module is used to realize high-speed rendering of teaching video streams and holographic images for the classroom side, where "classroom side" refers to both the lecturer classroom and the listening classroom;
  • the natural interaction module realizes the interaction between the teacher, the holographic teaching resources and the teaching environment during teaching by perceiving the lecturer's various interactive behaviors;
  • the holographic display module provides a display platform for holographic teaching resources and natural interaction;
  • the teaching service module provides teaching resources, teaching behavior and process analysis, and teaching service management for various users.
  • This embodiment also provides a working method of the above-mentioned 5G strong interactive remote delivery teaching system based on the holographic terminal (the working process of each module in the system is described in detail below):
  • (1) Data acquisition module: aimed at the diversified teaching behaviors in lecturer-classroom activities such as teacher teaching, student listening, and teacher-student interaction, and following the data acquisition flow chart shown in Figure 2, recording-and-broadcasting acquisition equipment together with motion, expression, head and gaze sensors collects in real time the teaching behavior data of teachers in the main classroom (video, voice, gesture and trunk movements, facial expressions, head rotation and eye focus) as well as holographic interactive images.
  • the data collection specifically includes the following steps:
  • (1-1) Voice and video data collection: using a multi-stream recording mode, the video and audio signals of the lecturer recorded by the recording-and-broadcasting system are integrated and synchronously recorded.
  • the voice data is recorded in PCM format and encoded with the G.728 audio protocol; the HDMI video source is encoded and compressed with the HEVC (H.265) video protocol.
  • standard streaming media content at 4K resolution is generated, and an MP4-format on-demand list is generated in the teaching service module.
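As a rough sanity check on the audio path above: G.728 runs at a fixed 16 kbit/s, a large saving over raw PCM. The 16 kHz / 16-bit mono capture format in this sketch is our assumption, not stated in the patent.

```python
# Back-of-envelope figures for the voice path. Assumption (not from the
# patent): 16 kHz / 16-bit mono PCM capture before G.728 encoding;
# G.728 itself runs at a fixed 16 kbit/s.

def pcm_bitrate_kbps(sample_rate_hz: int, bits_per_sample: int, channels: int = 1) -> float:
    """Bitrate of uncompressed PCM audio in kbit/s."""
    return sample_rate_hz * bits_per_sample * channels / 1000

raw = pcm_bitrate_kbps(16_000, 16)   # 256.0 kbit/s uncompressed
g728 = 16.0                          # kbit/s, fixed by the G.728 codec
ratio = raw / g728                   # 16:1 saving on the voice stream

print(f"raw PCM {raw} kbit/s vs G.728 {g728} kbit/s ({ratio:.0f}:1)")
```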
  • Attitude data collection: use depth-sensing equipment to collect depth images of the lecturer to obtain skeleton data; use inertia-based motion-capture equipment to track 25 key skeletal parts of the teacher and collect high-density, multi-angle, and typical feature information.
  • the lecturer's actions are recorded in the BVH motion format and uploaded to the cloud rendering engine to support subsequent data perception and interaction with the holographic environment.
  • a camera and a desktop telemetry eye tracker are used to build a gaze target tracking system.
  • the head pose estimation technology is used to obtain the spatial pose of the lecturer's head in real time, and it is synthesized and converted with the coordinate system of eye movement detection.
  • blink-compensation time parameters and tracking-loss compensation timestamps record the hold time of the gaze point and quantify visual attention to each region of interest; these key parameters are described in evs format, including the spatial posture of the lecturer's head, blink-compensation time parameters, tracking-loss compensation timestamps, the hold duration of gaze points, and region-of-interest parameters.
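The dwell-time bookkeeping described above (gaze-point hold time with blink compensation) might be sketched as follows; the sampling rate, bridging threshold and function names are illustrative, not from the patent.

```python
# Illustrative bookkeeping of gaze-point hold time per region of interest
# (ROI), bridging short tracking gaps caused by blinks, in the spirit of
# the evs parameters described above. All names/thresholds are our own.

def dwell_per_roi(samples, dt=1 / 60, max_blink=0.15):
    """samples: ROI label per fixed-rate gaze sample, or None when tracking
    was lost. Gaps no longer than max_blink seconds between samples of the
    same ROI count as continued fixation (blink compensation)."""
    dwell = {}
    i, n = 0, len(samples)
    while i < n:
        roi = samples[i]
        if roi is None:
            i += 1
            continue
        j = i
        while j < n:
            if samples[j] == roi:
                j += 1
            elif samples[j] is None:
                k = j
                while k < n and samples[k] is None:
                    k += 1
                # bridge the gap only if the same ROI resumes quickly
                if k < n and samples[k] == roi and (k - j) * dt <= max_blink:
                    j = k
                else:
                    break
            else:
                break
        dwell[roi] = dwell.get(roi, 0.0) + (j - i) * dt
        i = j
    return dwell
```

A three-sample `None` gap at 60 Hz (50 ms) is shorter than `max_blink`, so it is attributed to the surrounding fixation rather than ending it.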
  • the head coordinate system is established with the camera as the origin; the quaternion of the teacher's head-posture rotation is captured by the camera, and a motion direction model is constructed;
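The head-pose quaternion can be turned into a facing direction by rotating the camera-frame forward axis; a minimal sketch, assuming the (w, x, y, z) quaternion convention and a +z forward axis (neither is specified by the patent):

```python
# Rotate the unit forward vector (0, 0, 1) by a unit quaternion
# q = (w, x, y, z): this is the third column of q's rotation matrix,
# giving the head's facing direction in the camera coordinate system.

def rotate_forward(q):
    w, x, y, z = q
    return (2 * (x * z + w * y),
            2 * (y * z - w * x),
            1 - 2 * (x * x + y * y))
```

For the identity quaternion this returns the unchanged forward axis; a 90-degree rotation about the vertical (y) axis maps it onto the x axis.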
  • the "coarse-to-fine" search-and-matching strategy is applied to the face area to detect the key points of the teacher's face in the video sequence and extract the 3D coordinates of feature points such as the eyes, eyebrows and mouth.
  • the AAM and CLM methods integrate the human-body model and the local texture model to extract multi-pose face feature points;
  • the camera is used to capture the contours of facial organs, dynamically collect expression information, and complete real-time tracking of different facial expressions.
  • the centralized radio access network (C-RAN) is used in the wireless access scheme to convert 5G signals into WiFi signals;
  • CPE is used as the OTN dedicated-line equipment, and an AP converts the signals into WiFi signals;
  • wired optical-fiber solutions access the optical signal through optical gateways, optical splitters and other equipment, realizing the sharing of information resources such as data sending/receiving and channel quality.
  • (2-2) 5G positioning: as shown in Figure 4, positioning measurement technologies such as WiFi, common frequency bands, local area networks and inertial sensors are combined with high-frequency or millimeter-wave communication, and a hybrid positioning algorithm fuses the position estimates to predict the positioning result at the receiving end, reducing positioning anomalies, improving the reliability and stability of positioning, and outputting the optimal positioning result and decision response.
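The patent does not specify the hybrid positioning algorithm; one standard way to fuse heterogeneous position fixes is inverse-variance weighting, sketched here with invented numbers.

```python
# Inverse-variance weighted fusion of independent 2D position estimates
# (e.g. a noisy WiFi fix and a tighter mmWave fix). This is a common
# fusion scheme, offered as an illustration, not the patent's algorithm.

def fuse_positions(estimates):
    """estimates: list of ((x, y), variance) pairs; returns fused (x, y)."""
    wsum = sum(1.0 / var for _, var in estimates)
    fx = sum(x / var for (x, _), var in estimates) / wsum
    fy = sum(y / var for (_, y), var in estimates) / wsum
    return fx, fy

fused = fuse_positions([((1.0, 2.0), 0.5),   # WiFi fix, noisy
                        ((1.2, 2.1), 0.1)])  # mmWave fix, tighter
print(fused)
```

The tighter estimate dominates: the fused point lands much closer to the mmWave fix than to the WiFi one.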
  • (2-3) Data transmission link: a combination of central cloud and edge computing connects the cloud, terminals, the 5G core network and base stations, ensuring that 4K ultra-high-definition video and holographic image content can be collected and transmitted immediately;
  • MEC (multi-access edge computing) has "connection + computing + storage" capability and can deploy services at the network edge (such as the UPF side of the 5G network), offloading core-network traffic and computing power; signals received by the base station are processed directly in the MEC server, and transmission is adjusted in real time according to rules to optimize the data transmission and signal processing flow.
  • Cloud decoding and cloud encoding are a pair of opposite processes.
  • the cloud rendering module receives the audio/video streams and holographic images transmitted over the 5G network and, together with the other collected interaction data, completes the decoding work.
  • the main steps include entropy decoding, prediction, inverse quantization, inverse transformation and loop filtering; the steps are as follows:
  • the decoder obtains the compressed bit stream
  • the decoder uses the header information decoded from the bitstream to generate a predicted macroblock P, consistent with the predicted macroblock P previously generated in the encoder.
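The reconstruction rule above (decoded macroblock = predicted macroblock P + inverse-quantized, inverse-transformed residual) can be shown with a toy, runnable sketch; the real HEVC inverse quantization and inverse transform are far more involved, and QSTEP and the block contents here are invented.

```python
# Toy illustration of macroblock reconstruction: predicted block plus the
# inverse-quantized residual. The "inverse transform" is a trivial scale
# standing in for real inverse quantization + inverse DCT.

QSTEP = 2  # illustrative quantization step

def inverse_quantize(coeffs):
    return [c * QSTEP for c in coeffs]

def reconstruct_macroblock(predicted, quantized_residual):
    residual = inverse_quantize(quantized_residual)
    return [p + r for p, r in zip(predicted, residual)]

# encoder side produced residual = original - predicted, then quantized it
predicted = [10, 12, 14, 16]
quantized = [1, 0, -1, 2]
print(reconstruct_macroblock(predicted, quantized))  # [12, 12, 12, 20]
```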
  • GPU cloud rendering includes functions such as rendering scheduling, GPU computing power and a rendering engine, and supports the rendering of uploaded classroom-side audio/video streams and holographic images.
  • the rendering engine creates multiple threads for parallel execution.
  • the rendering information is determined according to the width and height of the received video image; the sending thread and the interaction thread are started simultaneously, and the following steps are performed in parallel.
  • the sending thread judges whether the number of compressed frames contained in the current H265 video block has reached one-sixth of the FPS; if so, it sets B to true, appends the block to the end of the MP4 video container buffer, and adjusts the header information; it then reinitializes the MP4 video container and compression-encoder parameters and sets B back to false; step IV is executed in a loop until the sending thread ends.
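The sending-thread rule above amounts to flushing a video block every FPS/6 compressed frames (six blocks per second); a minimal sketch of just that counting logic, with the MP4 container mocked away:

```python
# Counting logic of the sending thread: flush a block to the MP4 container
# buffer every fps // 6 compressed frames. Container/encoder work is
# represented only by comments.

def chunk_frames(num_frames: int, fps: int) -> list[int]:
    """Split num_frames compressed frames into blocks of fps // 6 frames;
    returns the size of every flushed block."""
    block_size = fps // 6
    blocks, current = [], 0
    for _ in range(num_frames):
        current += 1
        if current == block_size:   # B := true: append block to MP4 buffer,
            blocks.append(current)  # fix up the header information,
            current = 0             # reinit container/encoder, B := false
    if current:
        blocks.append(current)      # trailing partial block at stream end
    return blocks

print(chunk_frames(100, 30))  # twenty blocks of 5 frames at 30 FPS
```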
  • asynchronous rendering technology is adopted to ensure that the MTP (motion-to-photon latency) between cloud rendering and holographic display is at most 20 ms, i.e., the holographic image on the classroom holographic terminal stays within 2 frames of the frame being rendered by the cloud rendering engine, ensuring visual synchronization between the lecturer classroom and the listening classroom.
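Relating the two bounds above: at a rendering rate of f frames per second, a 2-frame lag spans 2000/f ms, so "at most 2 frames" and "at most 20 ms" coincide at about 100 fps. The frame rates below are illustrative, not taken from the patent.

```python
# How many whole frames fit inside the 20 ms motion-to-photon budget
# at a given rendering frame rate (frame rates are illustrative)?

def frames_within_mtp(fps: int, mtp_ms: float = 20.0) -> int:
    frame_time_ms = 1000.0 / fps
    return int(mtp_ms // frame_time_ms)

for fps in (30, 60, 100):
    print(fps, frames_within_mtp(fps))
```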
  • Natural interaction module: collect relevant information in the holographic environment of the lecturer classroom, analyze and sort out the lecturer's characteristics, and form different categories of motion, emotion and behavior; according to teaching requirements, preset spatial positioning points in the lecturer classroom, associate rich-media teaching resources with them, and register them against the classroom's spatial positioning points so that teachers can actively identify and trigger these points; according to the type of interactive input, generate the various operation commands that can interact with holographic images, realizing interaction between teachers and students, the environment and teaching resources.
  • (4-1) Perception sub-module: collect information about virtual objects, virtual scenes and the lecturer in the holographic imaging environment of the lecturer classroom; with the support of the teaching strategy library, behavior rule base and domain knowledge base, analyze and sort out the lecturer's characteristics to form classifications of movement, emotion and behavior.
  • a holographic display environment is constructed in the main classroom; through spatial positioning points, the main teacher can preset and locate teaching resources at a given position in the classroom space so that they can be called conveniently and flexibly during teaching, improving loading speed and reducing the time and steps needed to trigger the holographic scene.
  • the lecturer creates and edits rich-media teaching resources suitable for display in the holographic imaging environment, registers the virtual content in the real environment, and sets the trigger conditions to complete the association between teaching resources and real-space positioning points; the corresponding storage information is recorded in JSON format and uploaded to the cloud.
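The patent does not give the JSON schema; an illustrative shape for such a record (every field name here is our own invention) might look like this:

```python
# Hypothetical JSON record associating a rich-media teaching resource
# with a spatial positioning point before upload to the cloud.

import json

anchor_record = {
    "anchor_id": "classroom1-blackboard-left",          # hypothetical ID
    "position": {"x": 1.2, "y": 1.5, "z": 0.3},         # metres, classroom frame
    "resource": {"type": "hologram",
                 "uri": "res://geography/earth_model"}, # hypothetical URI
    "trigger": {"kind": "gaze", "dwell_ms": 800},       # activation condition
}

payload = json.dumps(anchor_record)  # serialized for upload
print(payload)
```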
  • Interactive commands: according to the lecturer's interactive input type, call the execution rules corresponding to input features such as voice, gesture, torso, line of sight and head, to generate operation commands such as push, pull, shake, move and drag that can interact with the holographic teaching resource image associated with a spatial positioning point.
  • (4-3-2) Command output: through interactive commands such as gestures and postures, the lecturer can select, rotate, zoom, move, show/hide and play animations of the holographic teaching content at spatial positioning points, and autonomously jump between and switch the interface, the scene and the holographic teaching environment.
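One simple way to realize the mapping from recognized input features to the operations listed above is a dispatch table; the gesture names and commands below are illustrative, not from the patent.

```python
# Minimal dispatch-table sketch mapping (modality, recognized feature)
# pairs to holographic operations. All entries are invented examples.

COMMANDS = {
    ("gesture", "pinch"):  "select",
    ("gesture", "twist"):  "rotate",
    ("gesture", "spread"): "zoom",
    ("gesture", "drag"):   "move",
    ("voice",   "hide"):   "show_hide",
    ("gaze",    "dwell"):  "trigger_anchor",
}

def dispatch(modality: str, feature: str) -> str:
    """Map an input modality and recognized feature to an operation."""
    return COMMANDS.get((modality, feature), "noop")

print(dispatch("gesture", "pinch"))  # select
```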
  • in this way, the interaction with the environment and teaching resources is realized.
  • Holographic display module: includes sub-modules for holographic teaching resource creation, holographic imaging environment construction, and holographic interaction.
  • create interactive and personalized virtual teaching scenes and use the Unity engine and the holographic rendering development kit to output the virtual scenes as holographic resources; equip the lecturer and listening classrooms with different holographic terminals to construct a holographic imaging environment integrating the virtual and the real; through visual prompts, tactile feedback, voice or sound effects, guide the lecturer to notice and trigger the spatial positioning points in the teaching environment, realizing multi-modal interaction.
  • the lecturer is equipped with a holographic head-mounted display with augmented reality function.
  • different configurations such as a holographic projector, a holographic LED screen, or holographic film are used; the holographic display terminal is used to construct a holographic teaching environment formed by superimposing virtual teaching resources on the real space of the listening classroom.
  • (5-2-1) Holographic display terminal.
  • the lecturer is equipped with a holographic headset with augmented-reality functions, and the cloud-rendered holographic image is transmitted to the remote listening classroom through the 5G network.
  • the teaching activities in the main classroom are reproduced three-dimensionally through holographic rendering.
  • the holographic display terminal is used to construct a holographic teaching environment formed by the superposition of holographic teaching resources and real space, and an information interaction loop among the lecturer, teaching resources and the real environment is built, so that students in the listening classroom get the same visual experience as in the main lecturer's classroom.
  • the holographic imaging system can make full use of the classroom's space environment; through visual prompts, tactile feedback, voice or sound effects it guides the lecturer to notice and trigger spatial positioning points in the holographic teaching environment and presents the associated holographic teaching resources, reproducing the teaching process and the interaction with teaching resources.
  • when the lecturer is teaching geography, for example, he can project an earth model into the lecturer's classroom.
  • the earth model can be zoomed in, zoomed out, and turned over, and the model can be viewed from different angles.
  • Teaching service module It includes three sub-modules: teaching resource application release, teaching behavior and process analysis, and teaching service management.
  • the application release of teaching resources includes the release and push of teaching resources and the update of spatial positioning points; by analyzing the teaching situation of teachers and students before, during and after class, analyzing the teaching style of lecturers and students' involvement in class, to obtain teaching results. Emotion, behavior, effect and other evaluation data; realize unified management of the entire teaching service module to ensure security and data integrity and consistency.
  • the teaching service module sends upgrade and update information to it through the message push mechanism, and uses the hot update method to push the course content, teaching resources, virtual scenes, etc. updated in the cloud to the data package.
  • Teaching service management Realize unified management of the entire teaching service system, including teacher authority, teaching resources, teaching activities, system settings, system maintenance, parameter settings, data backup and recovery as shown in Figure 13, to ensure the security of the teaching system and the integrity and consistency of data sex.
  • the "catalog scene tree” is used to manage the virtual teaching resources and their recorded and broadcast teaching resources according to the level of study period > subject > unit > knowledge point.
  • Each node corresponds to a teaching resource, and the node can clearly reflect its location and hierarchical relationship. , the clear structure is convenient for teachers to organize, query, download and manage.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Strategic Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Resources & Organizations (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A 5G strong-interaction remote delivery teaching system based on holographic terminals and a working method thereof. The remote delivery teaching system comprises a data acquisition module, a data transmission module, a 5G cloud rendering module, a natural interaction module, a holographic display module, and a teaching service module. In the working method, recording equipment and sensors capture in real time the lecturer's diverse teaching behaviors and holographic images in the main classroom; data are transmitted between the classroom clients, the cloud servers, and the rendering cluster over 5G network links; audio/video streams, holographic images, and other data pass through cloud decoding, context matching, real-time GPU rendering, and cloud encoding, and the finished holographic images are pushed to the holographic display terminals at the classroom clients. The system helps meet the needs of remote delivery teaching and provides delivery classrooms with a new holographic presentation and form of interaction.

Description

Working method of a 5G strong-interaction remote delivery teaching system based on holographic terminals. Technical field
The present invention belongs to the field of teaching applications of information technology, and more particularly relates to a working method of a 5G strong-interaction remote delivery teaching system based on holographic terminals.
Background art
Delivery classrooms mainly target remote areas with weak teaching staff, enabling rural schools that cannot offer full or high-quality classes to attend the same lesson as high-level urban schools over the Internet. Using a 5G network environment and holographic display terminals, a remote delivery teaching system can create a new kind of situated teaching environment; its immersive, interactive, imaginative, and intelligent characteristics can break the sense of isolation caused by online live teaching, which separates teachers and students on the two ends of a screen, overcome the lack of presence, and improve teachers' and students' concentration and sense of participation. With spatial positioning and tracking, the teacher in the main classroom interacts with teachers and students in the remote listening classrooms through multi-modal natural interaction such as eye movement, gaze, and speech. A holographic-terminal-based 5G strong-interaction remote delivery teaching system helps promote educational equity, narrow the urban-rural education gap, and improve education quality, and has broad application prospects.
Existing remote delivery teaching systems are mostly adapted from online live teaching and have the following problems: (1) insufficient information-technology infrastructure: slow networks and stuttering constrain online teaching, and under-resourced schools struggle to carry out high-level online teaching activities; (2) insufficiently realistic situated teaching environments: limited by network bandwidth and rendering capability, existing virtual-reality technologies, resources, terminals, and applications place most of the rendering work on the user terminal, making it hard to build a situated teaching environment deployable at scale; (3) insufficient interactivity: teacher-student interaction via online video lacks multi-modal, real-time, diverse forms and cannot meet the innovation needs of information-based teaching. These defects limit the application of delivery teaching in real teaching scenarios.
Summary of the invention
In view of the above defects of the prior art or needs for improvement, the present invention provides a working method of a 5G strong-interaction remote delivery teaching system based on holographic terminals, offering delivery classrooms a new holographic presentation and a new form of interaction.
The object of the present invention is achieved by the following technical measures.
The present invention provides a 5G strong-interaction remote delivery teaching system based on holographic terminals, comprising a data acquisition module, a data transmission module, a 5G cloud rendering module, a natural interaction module, a holographic display module, and a teaching service module;
the data acquisition module collects teaching-behavior data from lecturing/listening and teacher-student interaction sessions in the main classroom and the listening classrooms;
the data transmission module transmits audio/video streams and holographic image data between the main classroom, the 5G cloud rendering engine, and the holographic terminals of the listening classrooms;
the 5G cloud rendering module performs high-speed rendering of classroom teaching video streams and holographic images, the classroom clients being the main classroom and the listening classrooms;
the natural interaction module senses the lecturer's interactive behaviors to enable interaction between the teacher, the holographic teaching resources, and the teaching environment during teaching;
the holographic display module provides a presentation platform for holographic teaching resources and natural interaction;
the teaching service module provides all types of users with teaching resources, teaching-behavior and process analysis, and teaching-service management.
The present invention also provides a working method of the above holographic-terminal-based 5G strong-interaction remote delivery teaching system, comprising the following steps:
(1) Data acquisition: with recording equipment and motion, expression, head, and gaze sensors, the lecturer's diverse teaching behaviors and holographic images in the main classroom are captured in real time.
(2) Data transmission: over 5G network links, control, access, and forwarding techniques transmit data between the classroom clients, the cloud servers, and the rendering cluster.
(3) 5G cloud rendering: audio/video streams, holographic images, and other data pass through cloud decoding, context matching, real-time GPU rendering, and cloud encoding, and the finished holographic images are pushed to the holographic display terminals at the classroom clients.
(4) Natural interaction: relevant information is collected from the main classroom's holographic environment and analyzed and organized into different motion, emotion, and behavior categories; spatial anchor points preset in the main classroom are associated with rich-media teaching resources and registered with the classroom's spatial anchors; the teacher can actively recognize and trigger them, enabling interaction between teachers, students, the environment, and the teaching resources.
(5) Holographic display: according to teaching needs, interactive, personalized virtual teaching scenes are created and output as holographic resources using the Unity engine and a holographic rendering development kit; at the classroom clients, a holographic imaging environment blending the virtual and the real is built, guiding the lecturer to notice and trigger the spatial anchor points in the teaching environment and achieving multi-modal interaction.
(6) Teaching service: teaching-resource application release covers publishing and pushing teaching resources and updating spatial anchor points; statistics on teachers' and students' activity before, during, and after class are used to analyze the lecturer's teaching style and students' engagement, obtaining evaluation data on teaching emotion, behavior, and effectiveness; the whole teaching service module is managed centrally.
The beneficial effects of the present invention are:
A holographic-terminal-based 5G strong-interaction remote delivery teaching system is constructed. Recording equipment and motion, expression, head, and gaze sensors capture in real time the lecturer's diverse teaching behaviors and holographic images in the main classroom; over 5G network links, control, access, and forwarding techniques transmit data between the classroom clients, the cloud servers, and the rendering cluster. Audio/video streams, holographic images, and other data pass through cloud decoding, context matching, real-time GPU rendering, and cloud encoding, and the finished holographic images are pushed to the classroom holographic terminals. Information collected from the main classroom's holographic environment is analyzed and grouped into different motion, emotion, and behavior categories; spatial anchor points preset in the main classroom are associated with rich-media teaching resources and registered with them, and the teacher can actively recognize and trigger them, enabling interaction between teachers, students, the environment, and the teaching resources. According to teaching needs, interactive, personalized virtual teaching scenes are created and output as holographic resources using the Unity engine and a holographic rendering development kit; at the classroom clients, a holographic imaging environment blending the virtual and the real is built, guiding the lecturer to notice and trigger the spatial anchor points for multi-modal interaction. The teaching service module supports publishing and pushing teaching resources and updating spatial anchor points; statistics on teaching before, during, and after class yield the lecturer's teaching style, students' engagement, and evaluation data on teaching emotion, behavior, and effectiveness. As 5G networks and holographic display technology mature and their classroom applications grow, the present invention helps meet the needs of remote delivery teaching.
Brief description of the drawings
Fig. 1 is an architecture diagram of the holographic-terminal-based 5G strong-interaction remote delivery teaching system in an embodiment of the invention.
Fig. 2 is a workflow diagram of the data acquisition module in an embodiment of the invention.
Fig. 3 is a workflow diagram of the data transmission module in an embodiment of the invention.
Fig. 4 is a workflow diagram of 5G positioning in an embodiment of the invention.
Fig. 5 is a workflow diagram of the 5G cloud rendering module in an embodiment of the invention.
Fig. 6 is a workflow diagram of the perception sub-module in an embodiment of the invention.
Fig. 7 is a workflow diagram of the registration sub-module in an embodiment of the invention.
Fig. 8 is a workflow diagram of interaction-command output in an embodiment of the invention.
Fig. 9 is a workflow diagram of the holographic display module in an embodiment of the invention.
Fig. 10 is a schematic diagram of the teacher interacting with holographic teaching resources in an embodiment of the invention.
Fig. 11 is a workflow diagram of resource application release in the teaching service module in an embodiment of the invention.
Fig. 12 is a workflow diagram of teaching-behavior and process analysis in the teaching service module in an embodiment of the invention.
Fig. 13 is a workflow diagram of teaching-service management in the teaching service module in an embodiment of the invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it. In addition, the technical features involved in the embodiments described below may be combined with one another as long as they do not conflict.
As shown in Fig. 1, this embodiment provides a 5G strong-interaction remote delivery teaching system based on holographic terminals, comprising a data acquisition module, a data transmission module, a 5G cloud rendering module, a natural interaction module, a holographic display module, and a teaching service module;
the data acquisition module collects teaching-behavior data from lecturing/listening and teacher-student interaction sessions in the main classroom and the listening classrooms;
the data transmission module transmits audio/video streams and holographic image data between the main classroom, the 5G cloud rendering engine, and the holographic terminals of the listening classrooms;
the 5G cloud rendering module performs high-speed rendering of classroom teaching video streams and holographic images, the classroom clients being the main classroom and the listening classrooms;
the natural interaction module senses the lecturer's interactive behaviors to enable interaction between the teacher, the holographic teaching resources, and the teaching environment during teaching;
the holographic display module provides a presentation platform for holographic teaching resources and natural interaction;
the teaching service module provides all types of users with teaching resources, teaching-behavior and process analysis, and teaching-service management.
This embodiment also provides a working method of the above holographic-terminal-based 5G strong-interaction remote delivery teaching system (the workflow of each module is described in detail below):
(1) Data acquisition module. For the lecturer's diverse teaching behaviors during lecturing/listening and teacher-student interaction in the main classroom, following the data-acquisition flowchart shown in Fig. 2, recording equipment and motion, expression, head, and gaze sensors capture in real time the teacher's video, speech, gesture and body movements, facial expressions, head rotation, gaze focus, and other teaching-behavior data, as well as holographic interaction footage. Data acquisition specifically comprises the following steps:
(1-1) Speech and video acquisition. Using multi-stream recording, the video and audio signals captured by the recording system in the main classroom are integrated into a synchronized recording: speech data are stored in PCM format using the G.728 audio codec; the HDMI video source is encoded and compressed into standardized 4K streaming content using the HEVC video codec, and an MP4 on-demand playlist is generated in the teaching service module.
(1-2) Posture acquisition. A depth-sensing device captures the lecturer's depth images to obtain skeleton data; an inertial motion-capture device tracks 25 key skeletal joints of the teacher, collecting high-density, multi-angle, characteristic information. The lecturer's motions are recorded in the common BVH motion format and uploaded to the cloud rendering engine to assist subsequent data perception and interaction with the holographic environment.
(1-3) Head and gaze tracking. In the main classroom, a camera and a desktop remote eye tracker form a gaze-target tracking system. Head-pose estimation obtains the lecturer's head pose in space in real time; this pose is composed and transformed into the coordinate system of the eye tracker. Blink-compensation time parameters and tracking-loss-compensation timestamps are set, fixation durations are recorded, and visual attention in regions of interest is quantified. These key parameters (the lecturer's head pose, the blink-compensation time parameters, the tracking-loss-compensation timestamps, the fixation durations, and the region-of-interest parameters) are described in the evs format.
I. A head coordinate system is established with the camera at the origin; the camera captures the quaternion of the teacher's head rotation to build a motion-direction model;
II. The Euler angles of the head pose are combined to determine the gaze range of the teacher in the main classroom;
III. The three-dimensional pose angles of the teacher's head are captured in real time and composed and transformed into the eye tracker's coordinate system, achieving combined head-eye gaze-target tracking.
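Steps I-III compose a head rotation captured as a quaternion with the eye tracker's frame via Euler angles. A minimal illustrative sketch (not part of the patent; function names and the yaw-range check are assumptions, with the ±15° half-angle borrowed from the anchor-trigger threshold in (4-2-3)):

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to (yaw, pitch, roll) in degrees."""
    # roll: rotation about the x-axis
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # pitch: rotation about the y-axis, clamped to avoid domain errors at the poles
    s = max(-1.0, min(1.0, 2 * (w * y - z * x)))
    pitch = math.asin(s)
    # yaw: rotation about the z-axis
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (yaw, pitch, roll))

def gaze_in_range(head_yaw_deg, target_yaw_deg, half_angle_deg=15.0):
    """True if the target direction lies within ±half_angle of the head yaw,
    with wrap-around at ±180°."""
    diff = (target_yaw_deg - head_yaw_deg + 180) % 360 - 180
    return abs(diff) <= half_angle_deg
```

The modular-arithmetic wrap in `gaze_in_range` keeps the comparison correct across the ±180° seam, e.g. a head yaw of 170° and a target at -175° differ by only 15°.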
(1-4) Expression acquisition. A regression-tree face-alignment algorithm is used to build a cascade of gradient-boosted residual regression trees (GBDT), which estimates the positions of the facial landmarks through the cascade update

S^(t+1) = S^(t) + r_t(I, S^(t)),

where S^(t) is the shape estimate at cascade stage t, t is the stage index, I is the image, and r_t is the regressor of the current stage.
First, face-region detection with the depth camera filters out the irrelevant background to obtain the teacher's face region;
then, a coarse-to-fine search-and-match strategy is applied to the face region to detect the lecturer's facial landmarks in the video sequence, extracting the 3D coordinates of feature points of the eyes, eyebrows, and mouth; based on the AAM and CLM methods, the human-body model and the local texture model are fused for multi-pose facial-landmark extraction;
finally, the camera captures the contours of the facial features, expression information is collected dynamically, and the different facial expressions are tracked in real time.
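The cascaded regression above follows the standard ensemble-of-regression-trees update: each stage adds a shape correction predicted from the image and the current estimate. A minimal sketch with stand-in regressors (in a real system each stage would be a gradient-boosted tree ensemble; this only illustrates the cascade structure):

```python
def cascade_predict(image, initial_shape, regressors):
    """ERT-style cascade: apply S_{t+1} = S_t + r_t(I, S_t) for each stage.

    `regressors` is a list of callables r(image, shape) -> correction vector;
    here they stand in for trained gradient-boosted regression trees."""
    shape = list(initial_shape)
    for r in regressors:
        delta = r(image, shape)
        shape = [s + d for s, d in zip(shape, delta)]
    return shape
```

With three dummy stages that each shift every coordinate by 1.0, an initial shape of `[0, 0]` ends at `[3, 3]`, showing how corrections accumulate through the cascade.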
(1-5) Holographic image acquisition. Wide-angle acquisition scans the main classroom from multiple angles and camera positions, fusing the lecturer's lecture video in real time; according to the direction and intensity of the classroom lighting, the teacher's position is adjusted promptly to keep binocular parallax balanced, dynamically capturing and tracking the lecturer's teaching.
(2) Data transmission module. As shown in Fig. 3, the data collected at the classroom clients and the cloud-rendered content are transmitted between the classroom clients, the cloud servers, and the rendering cluster over the 5G/WIFI/positioning module, wireless routers, and 5G and other wide-area IP links, using control, access, and forwarding techniques; 5G network access, transmission paths, multi-condition resource deployment, and cloud forwarding are optimized to ensure flexible, efficient use of the strong-interaction remote delivery classrooms.
(2-1) Classroom 5G access. The main and listening classrooms access the 5G network both wirelessly and by wire: the wireless scheme uses a centralized radio access network (C-RAN) to convert the 5G signal into a WIFI signal; the wired leased-line scheme uses CPE as the OTN leased-line device and an AP to convert the signal into WIFI; the wired fiber scheme connects the fiber signal through optical gateways, splitters, and similar devices, so the classroom clients share data transmission/reception, channel quality, and other information resources, strengthening collaboration.
(2-2) 5G positioning. As shown in Fig. 4, positioning measurements from WIFI, co-band, LAN, and inertial sensors are combined with high-frequency or millimeter-wave communication; a hybrid positioning algorithm fuses the position estimates, predicting the positioning result at the target receiver, reducing anomalous fixes, improving the reliability and stability of positioning, and outputting the optimal positioning result and decision response.
(2-3) Data transmission link. Central cloud and edge computing are combined to chain the cloud, the terminals, the 5G core network, and the base stations, ensuring capture-and-transmit delivery of 4K ultra-HD video and holographic content. MEC multi-access edge computing provides connectivity, computing, and storage, sinking services to the network edge (e.g., the UPF side of the 5G network) to offload more core-network traffic and computation; signals received by the base station are processed directly in the MEC server, and transmission is adjusted in real time according to rules, optimizing data transmission and signal processing.
(3) 5G cloud rendering module. As shown in Fig. 5, the audio/video streams, holographic images, and other data from the main and listening classrooms are transmitted to the cloud rendering module over 5G links; after cloud decoding, context matching, real-time GPU rendering, and cloud encoding, the finished holographic images are pushed to the holographic display terminals at the classroom clients.
(3-1) Cloud decoding. Cloud decoding and cloud encoding are a pair of inverse processes. The cloud rendering module receives the audio/video streams and holographic images transmitted over the 5G network and, combined with the other collected interaction data, completes decoding. The main steps are entropy decoding, prediction, inverse quantization, inverse transform, and loop filtering, implemented as follows:
I. During entropy decoding, the decoder obtains the compressed bitstream;
II. The data elements are entropy-decoded and reordered into a series of quantized coefficients X;
III. Inverse quantization and inverse transform produce Dn', consistent with Dn' at the encoder;
IV. Using the header information decoded from the bitstream, the decoder generates the prediction macroblock P, identical to the prediction macroblock P previously generated in the encoder.
(3-2) Context matching. Combining the teaching context with the lecturer's interactions with the resources, the holographic frame to present next from the lecturer's viewpoint is determined; different teaching contexts and interactive operations require updating the position, pose, and scale of each model in the teaching scene accordingly, thereby determining the content of the next rendered frame.
(3-3) GPU cloud rendering. This comprises render scheduling, GPU compute, and rendering-engine functions, and supports rendering of the uploaded classroom audio/video streams and holographic images. GPU compute is applied through the rendering engine to render the content of each frame, generating new holographic images and sound. The cloud-rendering workflow is as follows:
I. The rendering engine creates multiple threads for parallel execution. In the main thread, a render-complete flag A and a video-chunk-packaged flag B are set, with A = false and B = false. The rendering information is determined from the width and height of the received video images, and the send thread and the interaction thread are started to execute the following steps in parallel.
II. Rendering is gated on A: if A is false, the 3D scene is rendered and A is set to true on completion. Compression is gated on A and B: if A is true and B is false, the rendered image is compressed into a frame of the H.265 video stream, placed in the video container buffer, and A is set to false. This step loops until the render thread ends.
III. The send thread checks whether the current H.265 video chunk contains one-sixth of FPS compressed frames; if so, B is set to true, the file is appended to the tail of the MP4 video container buffer, and the related header information is adjusted. The MP4 video container and the encoder parameters are then re-initialized, and B is set to false. This step loops until the send thread ends.
IV. If the interaction thread receives interaction data and A is true, the received interaction data are passed to the rendering engine.
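Steps I-IV coordinate a render thread and a send thread through the two flags A and B. A simplified, runnable sketch of that coordination (an illustration, not the patent's implementation: placeholder strings stand in for rendered frames and H.265 compression, and the chunk size of FPS/6 follows step III):

```python
import threading, queue, time

FPS = 30
CHUNK_FRAMES = FPS // 6          # one-sixth of FPS compressed frames per chunk

buffer = queue.Queue()           # stands in for the video container buffer
rendered = threading.Event()     # flag A: a frame has been rendered
packaged = threading.Event()     # flag B: the current chunk is being finalized
stop = threading.Event()

def render_thread():
    frame_id = 0
    frame = None
    while not stop.is_set():
        if not rendered.is_set():            # A == false: render the 3D scene
            frame = f"frame-{frame_id}"      # placeholder for real rendering
            frame_id += 1
            rendered.set()                   # A = true
        if rendered.is_set() and not packaged.is_set():
            buffer.put(frame)                # "compress" into the chunk buffer
            rendered.clear()                 # A = false
        time.sleep(0.001)

def send_thread(chunks):
    current = []
    while not stop.is_set() or not buffer.empty():
        try:
            current.append(buffer.get(timeout=0.01))
        except queue.Empty:
            continue
        if len(current) >= CHUNK_FRAMES:     # chunk holds FPS/6 frames
            packaged.set()                   # B = true while finalizing
            chunks.append(list(current))     # append to the MP4 container
            current.clear()                  # re-initialize container state
            packaged.clear()                 # B = false

chunks = []
threads = [threading.Thread(target=render_thread),
           threading.Thread(target=send_thread, args=(chunks,))]
for t in threads:
    t.start()
time.sleep(0.3)
stop.set()
for t in threads:
    t.join()
print(len(chunks), "chunks of", CHUNK_FRAMES, "frames each")
```

While B is set the render thread keeps its pending frame instead of re-queueing it, mirroring the way step II stalls compression while step III finalizes a chunk.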
(3-4) Cloud encoding. The H.265 video-coding standard, with its high compression ratio and strong robustness, is used to encode the audio/video streams and holographic images produced by cloud rendering; to keep the frame interval stable, encoding without B-frames is used. The main operations comprise prediction, transform, quantization, entropy coding, and loop filtering.
(3-5) Cloud-terminal asynchronous rendering. To guarantee a smooth user experience, asynchronous rendering keeps the motion-to-photon (MTP) latency between cloud rendering and holographic display at or below 20 ms; that is, the holographic frame shown on the classroom terminal stays within 2 frames of the frame the cloud rendering engine is currently rendering, ensuring visual synchronization between the main classroom and the listening classrooms.
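One way to hold the 2-frame cap is for the cloud engine to check its lead over the last frame the terminal reports as displayed before rendering the next one. A small sketch (names and the reporting model are illustrative assumptions, not the patent's mechanism):

```python
def cloud_may_render(cloud_frame, displayed_frame, max_gap=2):
    """The cloud engine proceeds only while rendering one more frame would
    keep it at most max_gap frames ahead of the terminal's displayed frame."""
    return cloud_frame - displayed_frame < max_gap

# Toy simulation: the terminal displays at half the cloud's tick rate,
# so the cloud periodically stalls to respect the 2-frame cap.
displayed, cloud, stalls = 0, 0, 0
for tick in range(10):
    if cloud_may_render(cloud, displayed):
        cloud += 1        # render the next frame
    else:
        stalls += 1       # wait for the terminal to catch up
    if tick % 2:          # terminal consumes a frame every other tick
        displayed += 1
```

In the simulation the gap `cloud - displayed` never exceeds 2, which is the invariant the asynchronous-rendering step enforces.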
(3-6) Load-balancing algorithm. During holographic rendering, a load-balancing algorithm based on dynamically predicted recursion depth ensures that each GPU takes on a load matched to its rendering performance, keeping draw times roughly equal and the rendering system consistently stable.
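The goal of equal draw times can be illustrated by splitting the frame in proportion to each GPU's measured speed and updating the speed estimates dynamically. This sketch shows proportional balancing with an exponential moving average; it is an illustration of the idea, not the patent's recursion-depth algorithm:

```python
def split_load(total_rows, gpu_speeds):
    """Split total_rows of a frame across GPUs in proportion to measured speed,
    so each GPU's draw time (rows / speed) comes out roughly equal."""
    total_speed = sum(gpu_speeds)
    shares = [int(total_rows * s / total_speed) for s in gpu_speeds]
    shares[-1] += total_rows - sum(shares)   # give the rounding remainder away
    return shares

def update_speed(speed, rows, draw_time, alpha=0.3):
    """Exponential moving average of a GPU's throughput (rows per second),
    measured from the last frame's actual draw time."""
    measured = rows / draw_time
    return (1 - alpha) * speed + alpha * measured
```

For a 2160-row frame and GPUs with relative speeds 1 : 2 : 1, `split_load` assigns 540, 1080, and 540 rows, so each GPU finishes in about the same time.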
(4) Natural interaction module. Relevant information is collected from the main classroom's holographic environment, and the lecturer's features are analyzed and organized into different motion, emotion, and behavior categories; according to teaching requirements, spatial anchor points are preset in the main classroom, associated with rich-media teaching resources, and registered with the classroom's spatial anchors, which the teacher can actively recognize and trigger; according to the interaction input type, operation commands for interacting with the holographic images are generated, enabling interaction between teachers, students, the environment, and the teaching resources.
(4-1) Perception sub-module. Information about the virtual objects, virtual scenes, and the lecturer in the main classroom's holographic imaging environment is collected; supported by the teaching-strategy base, the behavior-rule base, and the domain-knowledge base, the lecturer's features are analyzed and organized into different motion, emotion, and behavior categories.
(4-1-1) Information collection. As shown in Fig. 6, relevant information in the main classroom is collected, such as the addition, deletion, modification, and recombination of teaching objects in the holographic imaging environment, virtual-scene jumps and their order, and the lecturer's body posture, facial expression, head movement, gaze changes, and other feature data and behavioral changes;
(4-1-2) Information processing. According to teaching objectives, styles, and characteristics, supported by the teaching-strategy base, the behavior-rule base, and the domain-knowledge base, the lecturer's feature data are analyzed and organized; reactive, composite, and intelligent levels of feature data are classified by motion, emotion, and behavior.
(4-2) Registration sub-module. As shown in Fig. 7, the teaching scene is constructed; according to teaching requirements, spatial anchor points are preset in the main classroom, associated with rich-media teaching resources, and registered with the classroom's spatial anchors; during teaching, the teacher can actively recognize and trigger the teaching resources associated with an anchor, whose holographic images are then presented on the holographic display terminal.
(4-2-1) Presetting spatial anchors. A holographic display environment is built in the main classroom; via spatial anchors, the lecturer can pre-place and position teaching resources at particular locations in the classroom space and invoke them conveniently and flexibly during teaching, improving loading speed and reducing the time and steps needed to trigger holographic scenes.
(4-2-2) Associating teaching resources. According to subject teaching requirements, the lecturer creates and edits rich-media teaching resources suited to holographic display, registers the virtual content in the real environment, sets trigger conditions, and completes the association between the teaching resources and the real-space anchors; the corresponding storage information is recorded in JSON and uploaded to the cloud.
(4-2-3) Triggering spatial anchors. During lecturing, Q&A, and interactive activities in the main classroom, the lecturer can actively trigger the teaching resources associated with an anchor as teaching requires; for example, once the lecturer's gaze direction (within a ±15° range) and distance (less than 3 meters) meet the trigger thresholds, the holographic terminal presents the holographic images of the associated teaching resources.
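The gaze-and-distance trigger above reduces to a cone test: the anchor must lie within ±15° of the gaze direction and closer than 3 m. A minimal sketch (a 2-D floor-plane simplification with illustrative names; the thresholds come from the text):

```python
import math

def anchor_triggered(teacher_pos, gaze_dir, anchor_pos,
                     max_angle_deg=15.0, max_dist_m=3.0):
    """True when the anchor lies within ±max_angle_deg of the gaze direction
    and less than max_dist_m from the teacher (floor-plane simplification)."""
    dx, dy = anchor_pos[0] - teacher_pos[0], anchor_pos[1] - teacher_pos[1]
    dist = math.hypot(dx, dy)
    if dist >= max_dist_m:
        return False
    if dist == 0.0:
        return True                      # standing on the anchor point
    gx, gy = gaze_dir
    cos_a = (dx * gx + dy * gy) / (dist * math.hypot(gx, gy))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return angle <= max_angle_deg
```

A full implementation would take the gaze direction from the head-eye tracking of step (1-3) and test in 3-D, but the thresholding logic is the same.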
(4-3) Interaction command output. As shown in Fig. 8, behavior rules are invoked according to the lecturer's interaction input type to generate operation commands for interacting with the teaching resources; the teacher can freely jump between and switch the interfaces, scenes, and models of the holographic teaching environment, interacting with the environment and the teaching resources.
(4-3-1) Interaction commands. According to the lecturer's interaction input type, execution rules corresponding to voice, gesture, torso, gaze, and head input features are invoked to generate push, pull, pan, move, and drag commands for interacting with the holographic teaching-resource images associated with the spatial anchors.
(4-3-2) Command output. Through gesture, posture, and other interaction commands, the lecturer can select, rotate, scale, move, show/hide, and animate the holographic teaching content associated with the spatial anchors, freely jumping between and switching the interfaces, scenes, and models of the holographic teaching environment, interacting with the environment and the teaching resources.
(5) Holographic display module. Comprises sub-modules for holographic teaching-resource creation, holographic imaging-environment construction, and holographic interaction. As shown in Fig. 9, interactive, personalized virtual teaching scenes are created according to teaching needs and output as holographic resources using the Unity engine and a holographic rendering development kit; the main and listening classrooms are equipped with different holographic terminals to build a holographic imaging environment blending the virtual and the real; visual prompts, tactile feedback, voice, or sound effects guide the lecturer to notice and trigger the spatial anchor points in the teaching environment, achieving multi-modal interaction.
(5-1) Creating holographic teaching resources. According to teaching needs, the display attributes, sound effects, and playback order of the 3D models in the teaching scene are modified, completing the editing of personalized, interactive virtual teaching scenes; the Unity engine and the holographic rendering development kit output these realistic, highly interactive virtual teaching scenes as holographic resources.
(5-1-1) Editing interactive virtual teaching scenes. A complete virtual teaching-resource library is built so teachers can quickly find and select the virtual teaching resources they need; according to teaching needs, the geometry, texture, material, and other attributes of the 3D models in the teaching scene are modified, sounds and sound effects are added, the rendering mode of the virtual scene is specified, and the animation playback order is set, completing the editing of personalized, interactive virtual teaching scenes.
(5-1-2) Creating holographic teaching resources. Using the Unity engine and the holographic rendering development kit, realistic, highly interactive virtual teaching scenes are output as holographic resources, achieving what-you-operate-is-what-you-get creation of holographic teaching resources; they are associated with spatial anchors and are activated, invoked, and viewed as teaching requires.
(5-2) Building the holographic imaging environment. In the main classroom, the lecturer is equipped with a holographic head-mounted display with augmented-reality capability; the listening classrooms use different configurations such as holographic projectors, holographic LED screens, and holographic film; the holographic display terminals construct a holographic teaching environment formed by superimposing virtual teaching resources on the real space of the main classroom.
(5-2-1) Holographic display terminals. In the main classroom, the lecturer wears a holographic headset with augmented-reality capability, and the cloud-rendered holographic frames are delivered to the remote listening classrooms over the 5G network; the listening classrooms use holographic projector, holographic LED screen, holographic film, and similar configurations to reproduce the teaching activities of the main classroom in three dimensions through holographic rendering.
(5-2-2) Building the holographic teaching environment. In the main classroom, the holographic display terminals construct a holographic teaching environment formed by superimposing holographic teaching resources on real space, establishing an information-interaction loop among the lecturer, the teaching resources, and the real environment; by adopting the first-person view, teachers and students in the listening classrooms obtain the same visual experience as the lecturer.
(5-3) Holographic interaction. Visual prompts, tactile feedback, voice, or sound effects guide the lecturer to notice and trigger the spatial anchor points in the teaching environment; gestures, gaze, voice, and similar means are used to drag, rotate, and scale the objects in the surrounding holographic imaging environment.
(5-3-1) Interaction guidance. The holographic imaging system makes full use of the classroom's spatial environment and, through visual prompts, tactile feedback, voice, or sound effects, guides the lecturer to notice and trigger the spatial anchor points in the holographic teaching environment, presenting the associated holographic teaching-resource videos; the lecturer can interact with the teaching resources following the teaching workflow.
(5-3-2) Real-time interaction. The sensors and positional tracking built into the holographic headset capture the lecturer's position and movements in the holographic teaching space; the lecturer can also examine the details of the virtual objects in the teaching resources from multiple angles, using gestures, gaze, voice, and similar means to drag, rotate, and scale the virtual objects in the holographic environment.
As shown in Fig. 10, when teaching a geography lesson, the lecturer can project an Earth model into the main classroom and, through gestures, zoom in on, zoom out of, and flip the Earth model, viewing it from different angles.
(6) Teaching service module. Comprises three sub-modules: teaching-resource application release, teaching-behavior and process analysis, and teaching-service management. Teaching-resource application release covers publishing and pushing teaching resources and updating spatial anchor points; statistics on teachers' and students' activity before, during, and after class are used to analyze the lecturer's teaching style and students' engagement, obtaining evaluation data on teaching emotion, behavior, and effectiveness; the whole teaching service module is managed centrally, ensuring security and the integrity and consistency of data.
(6-1) Teaching-resource application release. As shown in Fig. 11, teaching resources adapted to multiple terminals are published according to the configuration information collected from the classroom clients; update packages are pushed to the clients through message-push and hot-update mechanisms, and spatial-anchor information is recorded and synchronized to the cloud.
(6-1-1) Publishing teaching resources. Download permissions for different teaching resources are granted according to teachers' permissions; content is matched at different resolutions according to the operating systems, screen resolutions, and other attributes of the main- and listening-classroom terminals; multi-terminal adaptation of the resources is completed using the absolute coordinates of the spatial anchors, ensuring no drift during interaction.
(6-1-2) Pushing teaching-resource applications. According to the classroom-client information recorded in the back end, the teaching service module sends upgrade and update information to the clients through the message-push mechanism, and hot updates push the course content, teaching resources, virtual scenes, and other content updated in the cloud to the classroom clients as data packages.
(6-1-3) Synchronizing spatial-anchor updates. The spatial-anchor information the lecturer sets and edits in the course resources is recorded in JSON format, including the ID, the 3D position, and the element, state, position, pose, and scale parameters of the holographic teaching scene; it is stored synchronously in the cloud so that the same spatial locations are shared across the different terminals of the delivery-classroom system.
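A record with the fields listed above might be serialized for upload as follows; all field names in this sketch are illustrative, not taken from the patent:

```python
import json

# Hypothetical spatial-anchor record carrying the fields named in (6-1-3):
# an ID, a 3-D position, and per-element state, position, pose, and scale
# of the holographic teaching scene.
anchor = {
    "id": "anchor-001",
    "position": [1.2, 0.0, 2.5],          # metres in the classroom frame
    "scene": {
        "elements": [
            {"name": "earth_model",
             "state": "visible",
             "position": [1.2, 1.5, 2.5],
             "pose": [0.0, 45.0, 0.0],    # Euler angles in degrees
             "scale": 0.5}
        ]
    },
}

payload = json.dumps(anchor)              # serialized for upload to the cloud
restored = json.loads(payload)
assert restored == anchor                 # lossless round trip
```

Because the record is plain JSON, any terminal in the delivery-classroom system can deserialize it and reconstruct the same anchored scene at the same spatial location.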
(6-2) Teaching-behavior and process analysis. As shown in Fig. 12, statistics on teachers' and students' activity before, during, and after class are used to analyze the lecturer's teaching style; the BERI model analyzes students' engagement in the listening classrooms, yielding evaluation data on teaching emotion, behavior, and effectiveness.
(6-2-1) Teaching statistics. For the remote, holographic teaching environment in which this system's teachers and students work, teachers' lesson preparation and students' previewing are recorded in real time before class, teacher and student operations are viewed in real time during class, and students' homework completion and their strong and weak knowledge points are analyzed after class; the S-T model is used to analyze the lecturer's teaching style.
(6-2-2) Student classroom-behavior profiles. The BERI model analyzes students' engagement in the listening classrooms, yielding evaluation data on teaching emotion, behavior, and effectiveness; combined with students' homework scores, completion progress, and similar information, a precise profile of each student is produced.
(6-2-3) Optimizing teaching activities. Teaching activities are monitored and metered in real time; service-node synchronization, load-balancing configuration, and resource monitoring are completed to ensure the main classroom's teaching activities are smoothly synchronized to every listening classroom; based on each classroom's IP address, the state of every node is monitored and judged, and the optimal node is chosen intelligently to provide high-quality resources and services.
(6-3) Teaching-service management. Unified management of the whole teaching-service system, including the teacher permissions, teaching resources, teaching activities, system settings, system maintenance, parameter settings, and data backup and recovery shown in Fig. 13, ensures the security of the teaching system and the integrity and consistency of data.
(6-3-1) Teacher-permission management. Handles the login, authentication, timing, resource editing, and classroom-creation functions when teacher users use the teaching service module; it assists teacher users in logging into the teaching service module, passing identity verification, recording their usage time, retrieving virtual teaching resources according to their permissions, and creating or logging into delivery classrooms.
(6-3-2) Teaching-resource management. A "catalog scene tree" manages the virtual teaching resources and their recorded lessons by the hierarchy study period > subject > unit > knowledge point; each node corresponds to one teaching resource and clearly reflects its position and hierarchical relationships, a clear structure that facilitates teachers' organizing, querying, downloading, and managing.
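The "catalog scene tree" is a four-level hierarchy with one resource per node. A minimal sketch (class and field names are illustrative):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class SceneNode:
    """One node of the catalog scene tree; leaves carry a teaching resource."""
    name: str
    resource: Optional[str] = None
    children: Dict[str, "SceneNode"] = field(default_factory=dict)

    def add(self, path, resource):
        # Walk or create the hierarchy: study period > subject > unit > knowledge point.
        node = self
        for part in path:
            node = node.children.setdefault(part, SceneNode(part))
        node.resource = resource
        return node

    def find(self, path):
        # Follow the path; None if any level is missing.
        node = self
        for part in path:
            node = node.children.get(part)
            if node is None:
                return None
        return node

catalog = SceneNode("root")
catalog.add(["primary", "geography", "unit-1", "the-earth"], "earth_model.holo")
```

Because every node records its children by name, a node's position and level in the hierarchy are implicit in the path used to reach it, which is what makes organizing, querying, and downloading straightforward.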
(6-3-3) Teaching-activity management. The delivery-classroom system flexibly organizes teaching activities in multiple forms, supporting one school leading multiple sites (one lecturing classroom teaching multiple listening classrooms online in synchrony) and one school leading multiple campuses (multiple lecturing classrooms teaching multiple listening classrooms online in synchrony).
Matters not described in detail in this specification belong to the prior art known to those skilled in the art.
Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (1)

  1. A 5G strong-interaction remote delivery teaching system based on holographic terminals, characterized in that: the system comprises a data acquisition module, a data transmission module, a 5G cloud rendering module, a natural interaction module, a holographic display module, and a teaching service module;
    the data acquisition module collects teaching-behavior data from lecturing/listening and teacher-student interaction sessions in the main classroom and the listening classrooms;
    the data transmission module transmits audio/video streams and holographic image data between the main classroom, the 5G cloud rendering engine, and the holographic terminals of the listening classrooms;
    the 5G cloud rendering module performs high-speed rendering of classroom teaching video streams and holographic images, the classroom clients being the main classroom and the listening classrooms;
    the natural interaction module senses the lecturer's interactive behaviors to enable interaction between the teacher, the holographic teaching resources, and the teaching environment during teaching;
    the holographic display module provides a presentation platform for holographic teaching resources and natural interaction; the holographic display module equips the main classroom and the listening classrooms with different holographic display terminals to build a holographic imaging environment blending the virtual and the real: in the main classroom the lecturer wears a holographic head-mounted display with augmented-reality capability, and the cloud-rendered holographic frames are delivered to the remote listening classrooms over the 5G network; the listening classrooms use holographic projector, holographic LED screen, and holographic film configurations to reproduce the teaching activities of the main classroom in three dimensions through holographic rendering;
    the teaching service module provides all types of users with teaching resources, teaching-behavior and process analysis, and teaching-service management.
    The working method of the above holographic-terminal-based 5G strong-interaction remote delivery teaching system comprises the following steps:
    (1) data acquisition: for the lecturer's diverse teaching behaviors during lecturing/listening and teacher-student interaction in the main classroom, recording equipment and motion, expression, head, and gaze sensors capture in real time the teacher's video, speech, gesture and body movement, facial expression, head rotation, and gaze-focus data, together with holographic interaction footage;
    (1-1) speech and video acquisition: multi-stream recording integrates the video and audio signals recorded in the main classroom into a synchronized recording: speech data are stored in PCM format using the G.728 audio codec; the HDMI video source is encoded and compressed into standardized 4K streaming content using the HEVC video codec, and an MP4 on-demand playlist is generated in the teaching service module;
    (1-2) posture acquisition: a depth-sensing device captures the lecturer's depth images to obtain skeleton data; an inertial motion-capture device tracks 25 key skeletal joints of the teacher, and the lecturer's motions are recorded in the common BVH motion format and uploaded to the cloud rendering engine to assist subsequent data perception and interaction with the holographic environment;
    (1-3) head and gaze tracking: in the main classroom a camera and a desktop remote eye tracker form a gaze-target tracking system; head-pose estimation obtains the lecturer's head pose in space in real time and composes and transforms it into the coordinate system of the eye tracker; blink-compensation time parameters and tracking-loss-compensation timestamps are set, fixation durations recorded, and visual attention in regions of interest quantified; these key parameters (the lecturer's head pose, the blink-compensation time parameters, the tracking-loss-compensation timestamps, the fixation durations, and the region-of-interest parameters) are described in the evs format;
    (1-4) expression acquisition: a regression-tree face-alignment algorithm builds cascaded residual regression trees to detect the lecturer's facial-landmark data in the video sequence, extracting the 3D coordinates of eye, eyebrow, and mouth feature points; facial-expression capture information is described as UTF8MB4-format strings;
    (1-5) holographic image acquisition: wide-angle acquisition scans the main classroom from multiple angles and camera positions and fuses the lecturer's lecture video in real time; according to the direction and intensity of the classroom lighting, the teacher's position is adjusted promptly, binocular parallax is kept balanced, and the lecturer's teaching is dynamically captured and tracked;
    (2) data transmission: the data collected at the classroom clients and the cloud-rendered content are transmitted between the classroom clients, the cloud servers, and the rendering cluster over the 5G/WIFI/positioning module, wireless routers, and 5G wide-area IP links, using control, access, and forwarding techniques; 5G network access, transmission paths, multi-condition resource deployment, and cloud forwarding are optimized;
    (2-1) classroom 5G access: the main and listening classrooms access the 5G network both wirelessly and by wire: the wireless scheme uses a centralized radio access network (C-RAN) to convert the 5G signal into a WIFI signal; the wired leased-line scheme uses CPE as the OTN leased-line device and an AP to convert the signal into WIFI; the wired fiber scheme connects the fiber signal through optical gateways and splitters, so the classroom clients share data transmission/reception, channel quality, and other information resources, strengthening collaboration;
    (2-2) 5G positioning: positioning measurements from WIFI, co-band, LAN, and inertial sensors are combined with high-frequency or millimeter-wave communication; a hybrid positioning algorithm fuses the position estimates, predicting the positioning result at the target receiver, reducing anomalous fixes, improving the reliability and stability of positioning, and outputting the optimal positioning result and decision response;
    (2-3) data transmission link: central cloud and edge computing are combined to chain the cloud, the terminals, the 5G core network, and the base stations, ensuring capture-and-transmit delivery of 4K ultra-HD video and holographic content; MEC edge computing offloads more core-network traffic and computation, optimizing data transmission and signal processing;
    (3) 5G cloud rendering: the audio/video streams and holographic image data from the main and listening classrooms are transmitted to the cloud rendering module over 5G links; after cloud decoding, context matching, real-time GPU rendering, and cloud encoding, the finished holographic images are pushed to the holographic display terminals at the classroom clients;
    (3-1) cloud decoding: cloud decoding and cloud encoding are a pair of inverse processes; the cloud rendering module receives the audio/video streams and holographic images transmitted over the 5G network and, combined with the other collected interaction data, completes decoding; the steps comprise entropy decoding, prediction, inverse quantization, inverse transform, and loop filtering;
    (3-2) context matching: combining the teaching context with the lecturer's interactions with the resources, the holographic frame to present next from the lecturer's viewpoint is determined; different teaching contexts and interactive operations require updating the position, pose, and scale attributes of the corresponding models in the teaching scene, thereby determining the content of the next rendered frame;
    (3-3) GPU cloud rendering: comprising render scheduling, GPU compute, and rendering-engine functions, it supports rendering of the uploaded classroom audio/video streams and holographic images; GPU compute is applied through the rendering engine to render the content of each frame, generating new holographic images and sound;
    (3-4) cloud encoding: the H.265 video-coding standard, with its high compression ratio and strong robustness, is used to encode the audio/video streams and holographic images produced by cloud rendering; to keep the frame interval stable, encoding without B-frames is used; the steps comprise prediction, transform, quantization, entropy coding, and loop filtering;
    (3-5) cloud-terminal asynchronous rendering: to guarantee a smooth user experience, asynchronous rendering keeps the MTP latency between cloud rendering and holographic display at or below 20 ms, i.e., the holographic frame shown on the classroom terminal stays within 2 frames of the frame the cloud rendering engine is currently rendering, ensuring visual synchronization between the main classroom and the listening classrooms;
    (3-6) load balancing: during holographic rendering, a load-balancing algorithm based on dynamically predicted recursion depth ensures each GPU takes on a load matched to its rendering performance, keeping draw times roughly equal and the rendering system consistently stable;
    (4) natural interaction: relevant information is collected from the main classroom's holographic environment, and the lecturer's features are analyzed and organized into different motion, emotion, and behavior categories; according to teaching requirements, spatial anchor points are preset in the main classroom, associated with rich-media teaching resources, and registered with the classroom's spatial anchors, which the teacher can actively recognize and trigger; according to the interaction input type, operation commands for interacting with the holographic images are generated, enabling interaction between teachers, students, the environment, and the teaching resources;
    (4-1) perception: information about the virtual objects, virtual scenes, and the lecturer in the main classroom's holographic imaging environment is collected; supported by the teaching-strategy base, the behavior-rule base, and the domain-knowledge base, the lecturer's features are analyzed and organized into different motion, emotion, and behavior categories;
    (4-1-1) information collection: relevant information in the main classroom is collected, including the addition, deletion, modification, and recombination of teaching objects in the holographic imaging environment, virtual-scene jumps and their order, and the lecturer's body posture, facial expression, head movement, and gaze-change feature data and behavioral changes;
    (4-1-2) information processing: according to teaching objectives, styles, and characteristics, supported by the teaching-strategy base, the behavior-rule base, and the domain-knowledge base, the lecturer's feature data are analyzed and organized; reactive, composite, and intelligent levels of feature data are classified by motion, emotion, and behavior;
    (4-2) registration: a teaching scene blending the virtual and the real is constructed; according to teaching requirements, spatial anchor points are preset in the main classroom, associated with rich-media teaching resources, and registered with the classroom's spatial anchors; during teaching, the teacher can actively recognize and trigger the teaching resources associated with an anchor, whose holographic images are presented on the holographic terminal;
    (4-2-1) presetting spatial anchors: a holographic display environment is built in the main classroom; via spatial anchors, the lecturer can pre-place and position teaching resources at particular locations in the classroom space and invoke them conveniently and flexibly during teaching, improving loading speed and reducing the time and steps needed to trigger holographic scenes;
    (4-2-2) associating teaching resources: according to subject teaching requirements, the lecturer creates and edits rich-media teaching resources suited to holographic display, registers the virtual content in the real environment, sets trigger conditions, and completes the association between the teaching resources and the real-space anchors; the corresponding storage information is recorded in JSON and uploaded to the cloud;
    (4-2-3) triggering spatial anchors: during lecturing, Q&A, and interactive activities in the main classroom, the lecturer can actively trigger the teaching resources associated with an anchor as teaching requires;
    (4-3) interaction command output: behavior rules are invoked according to the lecturer's interaction input type to generate operation commands for interacting with the teaching resources; the teacher can freely jump between and switch the interfaces, scenes, and models of the holographic teaching environment, interacting with the environment and the teaching resources;
    (4-3-1) interaction commands: according to the lecturer's interaction input type, execution rules corresponding to voice, gesture, torso, gaze, and head input features are invoked to generate push, pull, pan, move, and drag commands for interacting with the holographic teaching-resource images associated with the spatial anchors;
    (4-3-2) command output: through gesture and posture interaction commands, the lecturer can select, rotate, scale, move, show/hide, and animate the holographic teaching content associated with the spatial anchors, freely jumping between and switching the interfaces, scenes, and models of the holographic teaching environment, interacting with the environment and the teaching resources;
    (5) holographic display: comprising holographic teaching-resource creation, holographic imaging-environment construction, and holographic interaction; according to teaching needs, interactive, personalized virtual teaching scenes are created and output as holographic resources using the Unity engine and a holographic rendering development kit; the main and listening classrooms are equipped with different holographic display terminals to build a holographic imaging environment blending the virtual and the real; visual prompts, tactile feedback, voice, or sound effects guide the lecturer to notice and trigger the spatial anchor points in the teaching environment, achieving multi-modal interaction;
    (5-1) creating holographic teaching resources: according to teaching needs, the display attributes, sound effects, and playback order of the 3D models in the teaching scene are modified, completing the editing of personalized, interactive virtual teaching scenes; the Unity engine and the holographic rendering development kit output these realistic, highly interactive virtual teaching scenes as holographic resources;
    (5-1-1) editing interactive virtual teaching scenes: a complete virtual teaching-resource library is built so teachers can quickly find and select the virtual teaching resources they need; according to teaching needs, the geometry, texture, and material attributes of the 3D models in the teaching scene are modified, sounds and sound effects are added, the rendering mode of the virtual scene is specified, and the animation playback order is set, completing the editing of personalized, interactive virtual teaching scenes;
    (5-1-2) creating holographic teaching resources: using the Unity engine and the holographic rendering development kit, realistic, highly interactive virtual teaching scenes are output as holographic resources, achieving what-you-operate-is-what-you-get creation of holographic teaching resources; they are associated with spatial anchors and are activated, invoked, and viewed as teaching requires;
    (5-2) building the holographic imaging environment: in the main classroom, the lecturer is equipped with a holographic head-mounted display with augmented-reality capability; the listening classrooms use different configurations of holographic projector, holographic LED screen, and holographic film; the holographic display terminals construct a holographic teaching environment formed by superimposing virtual teaching resources on the real space of the main classroom;
    (5-2-1) building the holographic teaching environment: in the main classroom, the holographic display terminals construct a holographic teaching environment formed by superimposing holographic teaching resources on real space, establishing an information-interaction loop among the lecturer, the teaching resources, and the real environment; by adopting the first-person view, teachers and students in the listening classrooms obtain the same visual experience as the lecturer;
    (5-3) holographic interaction: visual prompts, tactile feedback, voice, or sound effects guide the lecturer to notice and trigger the spatial anchor points in the teaching environment; gestures, gaze, and voice are used to drag, rotate, and scale the objects in the surrounding holographic imaging environment;
    (5-3-1) interaction guidance: the holographic imaging system makes full use of the classroom's spatial environment and, through visual prompts, tactile feedback, voice, or sound effects, guides the lecturer to notice and trigger the spatial anchor points in the holographic teaching environment, presenting the associated holographic teaching-resource videos; the lecturer can interact with the teaching resources following the teaching workflow;
    (5-3-2) real-time interaction: the sensors and positional tracking built into the holographic headset capture the lecturer's position and movements in the holographic teaching space; the lecturer can also examine the details of the virtual objects in the teaching resources from multiple angles, using gestures, gaze, and voice to drag, rotate, and scale the virtual objects in the holographic environment;
    (6) teaching service: comprising teaching-resource application release, teaching-behavior and process analysis, and teaching-service management; teaching-resource application release covers publishing and pushing teaching resources and updating spatial anchor points; statistics on teachers' and students' activity before, during, and after class are used to analyze the lecturer's teaching style and students' engagement, obtaining evaluation data on teaching emotion, behavior, and effectiveness; the whole teaching service module is managed centrally, ensuring security and the integrity and consistency of data;
    (6-1) teaching-resource application release: according to the configuration information collected from the classroom clients, teaching resources adapted to multiple terminals are published; update packages are pushed to the clients through message-push and hot-update mechanisms, and spatial-anchor information is recorded and synchronized to the cloud;
    (6-1-1) publishing teaching resources: download permissions for different teaching resources are granted according to teachers' permissions; content is matched at different resolutions according to the operating systems and screen resolutions of the main- and listening-classroom terminals; multi-terminal adaptation of the resources is completed using the absolute coordinates of the spatial anchors, ensuring no drift during interaction;
    (6-1-2) pushing teaching-resource applications: according to the classroom-client information recorded in the back end, the teaching service module sends upgrade and update information to the clients through the message-push mechanism, and hot updates push the course content, teaching resources, and virtual scenes updated in the cloud to the classroom clients as data packages;
    (6-1-3) synchronizing spatial-anchor updates: the spatial-anchor information the lecturer sets and edits in the course resources is recorded in JSON format, including the ID, the 3D position, and the element, state, position, pose, and scale parameters of the holographic teaching scene; it is stored synchronously in the cloud so that the same spatial locations are shared across the different terminals of the delivery-classroom system;
    (6-2) teaching-behavior and process analysis: statistics on teachers' and students' activity before, during, and after class are used to analyze the lecturer's teaching style; the BERI model analyzes students' engagement in the listening classrooms, yielding evaluation data on teaching emotion, behavior, and effectiveness;
    (6-2-1) teaching statistics: for the remote, holographic teaching environment in which this system's teachers and students work, teachers' lesson preparation and students' previewing are recorded in real time before class, teacher and student operations are viewed in real time during class, and students' homework completion and their strong and weak knowledge points are analyzed after class; the S-T model is used to analyze the lecturer's teaching style;
    (6-2-2) student classroom-behavior profiles: the BERI model analyzes students' engagement in the listening classrooms, yielding evaluation data on teaching emotion, behavior, and effectiveness; combined with students' homework scores and completion progress, a precise profile of each student is produced;
    (6-2-3) optimizing teaching activities: teaching activities are monitored and metered in real time; service-node synchronization, load-balancing configuration, and resource monitoring are completed to ensure the main classroom's teaching activities are smoothly synchronized to every listening classroom; based on each classroom's IP address, the state of every node is monitored and judged, and the optimal node is chosen intelligently to provide high-quality resources and services;
    (6-3) teaching-service management: unified management of the whole teaching-service system, including teacher permissions, teaching resources, teaching activities, system settings, system maintenance, parameter settings, and data backup and recovery, ensuring the security of the teaching system and the integrity and consistency of data;
    (6-3-1) teacher-permission management: handles the login, authentication, timing, resource editing, and classroom-creation functions when teacher users use the teaching service module; assists teacher users in logging into the teaching service module, passing identity verification, recording their usage time, retrieving virtual teaching resources according to their permissions, and creating or logging into delivery classrooms;
    (6-3-2) teaching-resource management: a "catalog scene tree" manages the virtual teaching resources and their recorded lessons by the hierarchy study period > subject > unit > knowledge point; each node corresponds to one teaching resource and clearly reflects its position and hierarchical relationships, a clear structure that facilitates teachers' organizing, querying, downloading, and managing;
    (6-3-3) teaching-activity management: the delivery-classroom system flexibly organizes teaching activities in multiple forms, supporting one school leading multiple sites (one lecturing classroom teaching multiple listening classrooms online in synchrony) and one school leading multiple campuses (multiple lecturing classrooms teaching multiple listening classrooms online in synchrony).
PCT/CN2021/131153 2020-12-30 2021-11-17 一种基于全息终端的5g强互动远程专递教学***的工作方法 WO2022142818A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011604676.3 2020-12-30
CN202011604676.3A CN112562433B (zh) 2020-12-30 2020-12-30 一种基于全息终端的5g强互动远程专递教学***的工作方法

Publications (1)

Publication Number Publication Date
WO2022142818A1 true WO2022142818A1 (zh) 2022-07-07

Family

ID=75034332

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131153 WO2022142818A1 (zh) 2020-12-30 2021-11-17 一种基于全息终端的5g强互动远程专递教学***的工作方法

Country Status (3)

Country Link
US (1) US11151890B2 (zh)
CN (1) CN112562433B (zh)
WO (1) WO2022142818A1 (zh)

Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871164B (zh) * 2019-01-25 2020-10-16 维沃移动通信有限公司 一种消息发送方法及终端设备
US11315326B2 (en) * 2019-10-15 2022-04-26 At&T Intellectual Property I, L.P. Extended reality anchor caching based on viewport prediction
CN111260976B (zh) * 2020-03-20 2021-07-16 上海松鼠课堂人工智能科技有限公司 基于物联网的教学***
CN112562433B (zh) * 2020-12-30 2021-09-07 华中师范大学 一种基于全息终端的5g强互动远程专递教学***的工作方法
CN113095208B (zh) * 2021-04-08 2024-01-26 吉林工商学院 应用于大学英语教学课堂的注意力观测与提醒***
CN113242277B (zh) * 2021-04-19 2021-12-10 华中师范大学 一种5g网络环境下虚拟同步课堂教学***及其工作方法
CN116310023A (zh) * 2021-05-07 2023-06-23 贺之娜 一种基于5g的三维实时云渲染模拟***及模拟方法
US11868523B2 (en) * 2021-07-01 2024-01-09 Google Llc Eye gaze classification
CN113487082B (zh) * 2021-07-06 2022-06-10 华中师范大学 一种虚拟实验教学资源的注记复杂度度量和优化配置方法
CN115909825A (zh) * 2021-08-12 2023-04-04 广州视源电子科技股份有限公司 实现远程教育的***、方法及教学端
CN113724399B (zh) * 2021-09-02 2023-10-27 江西格灵如科科技有限公司 一种基于虚拟现实教学知识点展示方法和***
CN113965609B (zh) * 2021-09-13 2024-04-26 武汉灏存科技有限公司 群控式交互***及方法
CN113821104A (zh) * 2021-09-17 2021-12-21 武汉虹信技术服务有限责任公司 一种基于全息投影的可视化交互***
US11410570B1 (en) 2021-09-27 2022-08-09 Central China Normal University Comprehensive three-dimensional teaching field system and method for operating same
CN113593351B (zh) * 2021-09-27 2021-12-17 华中师范大学 一种立体综合教学场***的工作方法
CN113934635B (zh) * 2021-10-21 2022-07-19 江苏安超云软件有限公司 基于异构处理器提供同等算力云服务的方法及应用
CN114005309A (zh) * 2021-10-28 2022-02-01 北京同思佳创电子技术有限公司 一种利用全息交互设备互动课堂的教学方法
CN114005310A (zh) * 2021-10-28 2022-02-01 北京同思佳创电子技术有限公司 全息交互教学***
CN114007098B (zh) * 2021-11-04 2024-01-30 Oook(北京)教育科技有限责任公司 一种用于智能课堂中3d全息视频的生成方法和装置
CN114125479B (zh) * 2021-11-05 2023-12-19 游艺星际(北京)科技有限公司 信息处理方法、装置、电子设备和存储介质
CN113822777A (zh) * 2021-11-22 2021-12-21 华中师范大学 一种基于5g云渲染的虚拟教学资源聚合***及其工作方法
CN114237389B (zh) * 2021-12-06 2022-12-09 华中师范大学 一种基于全息成像的增强教学环境中临场感生成方法
US11532179B1 (en) * 2022-06-03 2022-12-20 Prof Jim Inc. Systems for and methods of creating a library of facial expressions
CN114237540A (zh) * 2021-12-20 2022-03-25 安徽教育出版社 一种智慧课堂在线教学互动方法、装置、存储介质及终端
CN114327060B (zh) * 2021-12-24 2023-01-31 华中师范大学 一种基于ai助手的虚拟教学***的工作方法
CN114257776B (zh) * 2021-12-27 2023-05-23 上海清鹤科技股份有限公司 教室互动***
CN114360329B (zh) * 2022-01-12 2023-11-10 四川传媒学院 一种用于艺术教育的交互式多功能演播室
CN114554144B (zh) * 2022-01-18 2024-04-26 南京中医药大学 一种基于嵌入式的网络直播视频流硬件化***及方法
CN114499699B (zh) * 2022-01-21 2022-12-02 西安电子科技大学 基于真实数字信号特性的实验结果远程呈现***
CN114283634B (zh) * 2022-01-24 2024-05-24 深圳快学教育发展有限公司 一种基于5g网络环境的虚拟同步课堂教学***
CN114613211A (zh) * 2022-02-06 2022-06-10 北京泽桥医疗科技股份有限公司 一种适用于医学教学场景的3d医疗模型***
CN114327081A (zh) * 2022-02-25 2022-04-12 广东工业大学 一种基于互联网思维下的虚拟仿真的环境设计教学形态
CN114743419B (zh) * 2022-03-04 2024-03-29 国育产教融合教育科技(海南)有限公司 一种基于vr的多人虚拟实验教学***
CN114358988B (zh) * 2022-03-11 2022-06-14 深圳市中文路教育科技有限公司 基于ai技术的教学方式推送方法及装置
CN114582185A (zh) * 2022-03-14 2022-06-03 广州容溢教育科技有限公司 一种基于vr技术的智能教学***
CN115170773A (zh) * 2022-05-24 2022-10-11 上海锡鼎智能科技有限公司 一种基于元宇宙的虚拟课堂动作交互***和方法
CN115190289A (zh) * 2022-05-30 2022-10-14 李鹏 3d全息视屏通信方法、云端服务器、存储介质及电子设备
CN115190261B (zh) * 2022-06-29 2024-06-21 广州市锐星信息科技有限公司 一种用于无线录播***的互动终端
CN115331493A (zh) * 2022-08-08 2022-11-11 深圳市中科网威科技有限公司 一种基于3d全息技术的立体综合教学***及方法
CN115373762B (zh) * 2022-09-20 2024-04-09 江苏赞奇科技股份有限公司 基于ai的实时云渲染中工具资源动态加载方法
CN115515002A (zh) * 2022-09-22 2022-12-23 深圳市木愚科技有限公司 基于虚拟数字人的智能化慕课生成方法、装置及存储介质
CN115933868B (zh) * 2022-10-24 2023-08-04 华中师范大学 翻转讲台的立体综合教学场***及其工作方法
CN115641648B (zh) * 2022-12-26 2023-08-18 苏州飞蝶虚拟现实科技有限公司 基于视觉对重复动作分析过滤的3d远程互动处理***
CN115689833B (zh) * 2022-12-29 2023-03-28 成都华栖云科技有限公司 基于多维感知和普适计算的智慧教学空间模式构建方法
CN116107435A (zh) * 2023-04-11 2023-05-12 深圳飞蝶虚拟现实科技有限公司 基于5g云计算的3d远程互动的动作同步***
CN116433432B (zh) * 2023-04-18 2023-11-21 北京漂洋过海科技有限责任公司 一种大数据的智慧校园管理***
CN117590929A (zh) * 2023-06-05 2024-02-23 北京虹宇科技有限公司 一种三维场景的环境管理方法、装置、设备及存储介质
CN116630486B (zh) * 2023-07-19 2023-11-07 山东锋士信息技术有限公司 一种基于Unity3D渲染的半自动化动画制作方法
CN116958353B (zh) * 2023-07-27 2024-05-24 深圳优立全息科技有限公司 一种基于动态捕捉的全息投影方法及相关装置
CN116758201B (zh) * 2023-08-16 2024-01-12 淘宝(中国)软件有限公司 三维场景的渲染处理方法、设备、***及计算机存储介质
CN116862730B (zh) * 2023-09-05 2023-11-21 山东劳动职业技术学院(山东劳动技师学院) 一种vr全息教学管理***
CN117176775B (zh) * 2023-11-02 2023-12-29 上海银行股份有限公司 一种基于远程服务的银行数据处理方法及***
CN117215416B (zh) * 2023-11-08 2024-05-07 北京烽火万家科技有限公司 移动终端全息交流方法、装置、计算机设备和存储介质
CN117420868B (zh) * 2023-12-18 2024-04-09 山东海智星智能科技有限公司 基于物联网的智慧教室控制***及方法
CN117575864B (zh) * 2024-01-16 2024-04-30 山东诚海电子科技有限公司 一种基于智慧校园的设施管理方法、设备及介质
CN118038722B (zh) * 2024-04-11 2024-06-25 南京南工智华智能技术有限公司 基于虚拟现实的课堂实景再现交互式教学***和方法

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170221267A1 (en) * 2016-01-29 2017-08-03 Tata Consultancy Services Limited Virtual reality based interactive learning
CN107479705A (zh) * 2017-08-14 2017-12-15 中国电子科技集团公司第二十八研究所 一种基于HoloLens的指挥所协同作业电子沙盘***
CN109035932A (zh) * 2018-08-21 2018-12-18 合肥创旗信息科技有限公司 一种vr全息教学***
CN109118855A (zh) * 2017-06-22 2019-01-01 格局商学教育科技(深圳)有限公司 一种巨屏全息还原真实场景的网络教学***
US20190080097A1 (en) * 2017-09-14 2019-03-14 International Business Machines Corporation Methods and systems for rendering holographic content
CN109564706A (zh) * 2016-12-01 2019-04-02 英特吉姆股份有限公司 基于智能交互式增强现实的用户交互平台
CN210091423U (zh) * 2019-12-30 2020-02-18 杭州赛鲁班网络科技有限公司 一种基于全息投影的远程教学互动***
CN110969905A (zh) * 2019-11-29 2020-04-07 塔普翊海(上海)智能科技有限公司 混合现实的远程教学互动、教具互动***及其互动方法
US20200142354A1 (en) * 2018-11-01 2020-05-07 International Business Machines Corporation Holographic image replication
CN112562433A (zh) * 2020-12-30 2021-03-26 华中师范大学 一种基于全息终端的5g强互动远程专递教学***及其工作方法

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5495576A (en) 1993-01-11 1996-02-27 Ritchey; Kurtis J. Panoramic image based virtual reality/telepresence audio-visual system and method
JP2003202799A (ja) * 2002-01-09 2003-07-18 Kyoiku Kikaku:Kk 通信ネットワークを利用した学習システム
TW200919210A (en) * 2007-07-18 2009-05-01 Steven Kays Adaptive electronic design
US10360808B2 (en) * 2013-08-30 2019-07-23 Amrita Vishwa Vidyapeetham System and method for synthesizing and preserving consistent relative neighborhood position in multi-perspective multi-point tele-immersive environments
US10360729B2 (en) * 2015-04-06 2019-07-23 Scope Technologies Us Inc. Methods and apparatus for augmented reality applications
CN204965778U (zh) * 2015-09-18 2016-01-13 华中师范大学 一种基于虚拟现实与视觉定位的幼儿教学***
WO2018044230A1 (en) * 2016-09-02 2018-03-08 Tan Meng Wee Robotic training apparatus and system
WO2018187748A1 (en) * 2017-04-07 2018-10-11 Unveil, LLC Systems and methods for mixed reality medical training
CN109118854A (zh) * 2017-06-22 2019-01-01 格局商学教育科技(深圳)有限公司 一种全景沉浸式直播互动教学***
CN107680165B (zh) 2017-09-25 2021-01-26 中国电子科技集团公司第二十八研究所 基于HoloLens的电脑操作台全息展现与自然交互应用方法
CN108172045A (zh) * 2018-03-02 2018-06-15 安徽时间分享信息科技有限公司 一种基于ar的远程教育视频通讯设备及***
CA3097897A1 (en) * 2018-04-30 2019-11-07 Breakthrough Performancetech, Llc Interactive application adapted for use by multiple users via a distributed computer-based system
CN110478892A (zh) 2018-05-14 2019-11-22 彼乐智慧科技(北京)有限公司 一种三维交互的方法及***
US10928775B2 (en) * 2018-07-17 2021-02-23 International Business Machines Corporation 3D holographic display and holographic object formation
CN109035915A (zh) 2018-08-21 2018-12-18 合肥创旗信息科技有限公司 一种vr全息教学管理***
CN109035887B (zh) * 2018-08-23 2020-11-03 重庆贝锦科技有限公司 基于5g网络进行传输与控制的全息投影教学装置
WO2020068132A1 (en) * 2018-09-28 2020-04-02 Yang Shao Wen Interactive environments using visual computing and immersive reality
CN109598989A (zh) * 2018-12-05 2019-04-09 沈阳惠诚信息技术有限公司 一种基于vr技术的电力通信故障检修虚拟交互***
CN109800663A (zh) * 2018-12-28 2019-05-24 华中科技大学鄂州工业技术研究院 基于语音和视频特征的教师教学评估方法及设备
CN109615961A (zh) * 2019-01-31 2019-04-12 华中师范大学 一种课堂教学师生互动网络***与方法
CN110013526A (zh) * 2019-05-13 2019-07-16 长沙市索菲亚创客健康管理有限公司 一种改善睡眠的按摩精油及其使用方法
KR102217783B1 (ko) * 2019-11-05 2021-02-19 한양대학교 산학협력단 학습자-교수자간 상호작용 극대화를 위한 5g 텔레프레즌스 기반 hy-live(하이-라이브) 교육 시스템
CN110971678B (zh) * 2019-11-21 2022-08-12 深圳职业技术学院 一种基于5g网络的沉浸式可视化校园***
CN110956863A (zh) * 2020-02-22 2020-04-03 上海墩庐生物医学科技有限公司 一种利用3d全息投影进行戏曲互动教学的方法
CN111372282B (zh) * 2020-03-05 2023-04-07 中国联合网络通信集团有限公司 基于5g技术的智慧校园***
CN111352243A (zh) * 2020-03-31 2020-06-30 北京塞傲时代信息技术有限公司 一种基于5g网络的ar远程渲染***及方法
CN111627271A (zh) * 2020-04-10 2020-09-04 北京文香信息技术有限公司 一种同步课堂***及其管理平台
CN111862711A (zh) * 2020-06-19 2020-10-30 广州光建通信技术有限公司 一种基于5g物联虚拟现实娱乐休闲学习装置

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170221267A1 (en) * 2016-01-29 2017-08-03 Tata Consultancy Services Limited Virtual reality based interactive learning
CN109564706A (zh) * 2016-12-01 2019-04-02 英特吉姆股份有限公司 基于智能交互式增强现实的用户交互平台
CN109118855A (zh) * 2017-06-22 2019-01-01 格局商学教育科技(深圳)有限公司 一种巨屏全息还原真实场景的网络教学***
CN107479705A (zh) * 2017-08-14 2017-12-15 中国电子科技集团公司第二十八研究所 一种基于HoloLens的指挥所协同作业电子沙盘***
US20190080097A1 (en) * 2017-09-14 2019-03-14 International Business Machines Corporation Methods and systems for rendering holographic content
CN109035932A (zh) * 2018-08-21 2018-12-18 合肥创旗信息科技有限公司 一种vr全息教学***
US20200142354A1 (en) * 2018-11-01 2020-05-07 International Business Machines Corporation Holographic image replication
CN110969905A (zh) * 2019-11-29 2020-04-07 塔普翊海(上海)智能科技有限公司 混合现实的远程教学互动、教具互动***及其互动方法
CN210091423U (zh) * 2019-12-30 2020-02-18 杭州赛鲁班网络科技有限公司 一种基于全息投影的远程教学互动***
CN112562433A (zh) * 2020-12-30 2021-03-26 华中师范大学 一种基于全息终端的5g强互动远程专递教学***及其工作方法

Also Published As

Publication number Publication date
US20210225186A1 (en) 2021-07-22
US11151890B2 (en) 2021-10-19
CN112562433B (zh) 2021-09-07
CN112562433A (zh) 2021-03-26

Similar Documents

Publication Publication Date Title
WO2022142818A1 (zh) 一种基于全息终端的5g强互动远程专递教学***的工作方法
CN106846940A (zh) 一种在线直播课堂教育的实现方法
CN106792246A (zh) 一种融合式虚拟场景互动的方法及***
CN109118854A (zh) 一种全景沉浸式直播互动教学***
CN106789991A (zh) 一种基于虚拟场景的多人互动方法及***
KR20160021146A (ko) 가상 동영상 통화 방법 및 단말
CN105376547A (zh) 一种基于3d虚拟合成技术的微课录制***及方法
CN105052154A (zh) 生成具有多个视点的视频
WO2023011221A1 (zh) 混合变形值的输出方法及存储介质、电子装置
CN114638732A (zh) 一种人工智能智慧教育平台及其应用
CN108305308A (zh) 虚拟形象的线下展演***及方法
CN104469304A (zh) 用于表演训练的智能录播***
CN112492231A (zh) 远程交互方法、装置、电子设备和计算机可读存储介质
KR20210081082A (ko) 객체의 모션 데이터 기반의 아바타 컨텐츠를 제공하는 서버, 방법 및 사용자 단말
CN110433491A (zh) 虚拟观众的动作同步响应方法、***、装置和存储介质
Hu et al. FSVVD: A dataset of full scene volumetric video
CN108320331B (zh) 一种生成用户场景的增强现实视频信息的方法与设备
CN205901924U (zh) 音视频二合一采集设备
Chunwijitra et al. Advanced content authoring and viewing tools using aggregated video and slide synchronization by key marking for web-based e-learning system in higher education
CN116863043A (zh) 面部动态捕捉驱动方法、装置、电子设备及可读存储介质
Sun et al. Video Conference System in Mixed Reality Using a Hololens
CN109255996A (zh) 一种在线课堂的播放优化方法及***
Takacs et al. Hyper 360—towards a unified tool set supporting next generation VR film and TV productions
CN206640699U (zh) 录播设备
Hongyi et al. The conversion of the production mode of film green screen visual effects in the setting of 5G technology

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913561

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21913561

Country of ref document: EP

Kind code of ref document: A1