CN114531604B - Intelligent processing method and system for online teaching video


Info

Publication number
CN114531604B
Authority
CN
China
Prior art keywords
video
course
data
acquiring
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210142089.XA
Other languages
Chinese (zh)
Other versions
CN114531604A (en)
Inventor
张楚婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Jiabang Information Technology Co ltd
Original Assignee
Guangzhou Jiabang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Jiabang Information Technology Co ltd filed Critical Guangzhou Jiabang Information Technology Co ltd
Priority to CN202210142089.XA priority Critical patent/CN114531604B/en
Publication of CN114531604A publication Critical patent/CN114531604A/en
Application granted granted Critical
Publication of CN114531604B publication Critical patent/CN114531604B/en

Classifications

    • H04N 21/23418 - Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • G06Q 50/205 - Education administration or guidance
    • G09B 5/08 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23424 - Splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N 21/234309 - Reformatting of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H04N 21/25875 - Management of end-user data involving end-user authentication
    • H04N 21/44008 - Client-side processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/44016 - Client-side splicing of one content stream with another, e.g. for substituting a video clip
    • H04N 21/440218 - Client-side reformatting of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N 21/441 - Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N 21/4756 - End-user interface for inputting end-user data for rating content, e.g. scoring a recommended movie
    • H04N 21/4788 - Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Physics & Mathematics (AREA)
  • Marketing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent processing method and system for online teaching videos. The method comprises the following steps: acquiring a student-side identity verification code, acquiring a target course code according to the verification code, and calling the corresponding target video course according to the target course code based on a preset cloud course library; performing video frame type analysis on the target video course, determining the video frame type of each frame, and losslessly compressing the target video course according to the video frame type of each frame; and, at the student side, performing video reconstruction according to the lossless compressed data to acquire the reconstructed video course, and performing state monitoring on the viewing process of the reconstructed video course to acquire a state monitoring result. The system comprises: a course verification module, a video compression module and a state monitoring module. The invention can improve the resolution of online teaching video and reduce the possibility of delay of the video data during transmission.

Description

Intelligent processing method and system for online teaching video
Technical Field
The invention relates to the technical field of video image processing, in particular to an intelligent processing method and system for an online teaching video.
Background
At present, the rapid development of the internet has boosted enthusiasm for online teaching. Online teaching videos commonly suffer from video pictures falling out of sync due to network delay and from low resolution of the teaching video received at the student end, both of which degrade the learning experience. In addition, unlike the traditional classroom mode, when a difficult knowledge point arises during a lecture or different students run into different problems, the teacher end cannot provide targeted explanation, so the teaching effect is not reflected promptly and accurately and online teaching efficiency suffers. To improve the speed of video transmission, the teaching video must be compressed at the server end, but compressed data easily suffers data loss: when the compressed data is reconstructed, the image resolution is reduced, the images acquire quantization noise, and important information is lost to quantization error.
Disclosure of Invention
The invention provides an intelligent processing method and system for online teaching video, which are used for solving the problems of image quality degradation and data transmission delay in the processing of online teaching videos.
As an embodiment of the present invention: an intelligent processing method of online teaching videos comprises the following steps:
acquiring a student identity verification code, acquiring a target course code according to the identity verification code, and calling a corresponding target video course according to the target course code based on a preset cloud course library;
performing video frame type analysis according to the target video course, determining the video frame type of each frame, and performing lossless compression processing on the target video course according to the video frame type of each frame;
and the student side performs video reconstruction according to the lossless compressed data to obtain the reconstructed video course, and performs state monitoring on the viewing process of the reconstructed video course to obtain a state monitoring result.
As an embodiment of the invention: the method comprises the steps of acquiring a student identity verification code, acquiring a target course code according to the identity verification code, calling a corresponding target video course according to the target course code based on a preset cloud course library, and comprises the following steps:
acquiring student-side login information, and acquiring course subscription information of the client based on a preset client subscription database according to the login information; wherein,
the course subscription information comprises: subscribed course codes, subscribed course names, course hours, course time and course teachers;
automatically generating a course selection list according to the course subscription information of the client, and acquiring a target course selection item of the client according to the course selection list;
and acquiring a corresponding course selection link according to the target course selection item of the client, searching a target course based on the cloud course library according to the course selection link, and determining a target video course.
As an embodiment of the invention: the analyzing the video frame type according to the target video course, determining the video frame type of each frame, and performing lossless compression processing on the target video course according to the video frame type of each frame, includes:
performing video segmentation according to the target video course to obtain a plurality of frames of image data, and performing down-sampling processing on each frame of image data to obtain primary image data;
acquiring reconstructed boundary pixels according to the primary image data, performing intra-frame prediction according to the reconstructed boundary pixels, determining spatial redundancy information in the primary image data, performing inter-frame prediction based on the spatial redundancy information, determining temporal redundancy information in the primary image data, and acquiring secondary image data according to the spatial redundancy information and the temporal redundancy information;
performing discrete cosine transform on the secondary image data, determining a signal concentration area corresponding to the secondary image data, and performing secondary discrete cosine transform according to the signal concentration area to determine primary compressed image data;
performing quantization processing on the first-level compressed image data, determining unnecessary data, discarding the unnecessary data, and determining second-level compressed data; wherein the non-essential data comprises: repeating data and floating point data;
and carrying out data processing on the second-level compressed data through entropy coding to obtain third-level compressed data.
As an embodiment of the present invention: the student terminal carries out video reconstruction according to the lossless compression data, acquires the reconstructed video course, carries out state monitoring aiming at the viewing process of the reconstructed video course, acquires a state monitoring result, and comprises:
performing video segmentation on the lossless compressed data to obtain each frame of compressed image data;
acquiring adjacent frame compressed image data according to each frame of compressed image data, and performing motion tracking according to pixel points in the adjacent frame compressed image data to acquire corresponding pixel point motion tracks;
establishing a model aiming at the motion trail of the pixel points corresponding to the compressed image data to generate a video reconstruction model;
performing motion compensation calculation according to the video reconstruction model to obtain a motion compensation error, correcting the motion compensation error, and performing update processing on the corrected video reconstruction model, wherein the update processing is to perform correction update on the video reconstruction model;
according to the updated video reconstruction model, carrying out reconstruction processing on the lossless compression data to obtain a reconstructed video course;
and based on the reconstructed video course, monitoring the checking process of the student side in real time, and recording monitoring data.
As an embodiment of the present invention: the student end carries out video reconstruction according to lossless compression data, acquires the video course of rebuilding, and is directed against the process of looking over of rebuilding video course carries out state monitoring, acquires the state monitoring result, still includes:
performing noise analysis according to the lossless compressed data, acquiring quantization noise data corresponding to the lossless compressed data, distinguishing the quantization noise data, and confirming a distinguishing result; wherein the quantization noise can be divided into: transform domain quantization noise, spatial domain quantization noise;
when the distinguishing result of the quantization noise is space domain quantization noise, converting the space domain quantization noise to form transform domain quantization noise;
carrying out noise separation on the transform domain quantization noise according to a preset frequency level, and determining a frequency noise separation result; wherein the preset frequency level comprises: high frequency, medium frequency, low frequency;
and converting the transform domain quantization noise into the original space domain quantization noise through inverse conversion for denoising based on the noise separation result.
As an embodiment of the present invention: the student end carries out video reconstruction according to lossless compression data, acquires the video course of rebuilding, and aims at the process of looking over of rebuilding video course carries out state monitoring, acquires the state monitoring result, still includes:
acquiring state monitoring data, and sending the state monitoring data to a preset big data processing platform to acquire abnormal monitoring data;
extracting fault characteristics aiming at the abnormal monitoring data to obtain fault image characteristics, classifying according to the fault image characteristics to obtain classified fault characteristics; wherein, the trouble includes: network connection failure and video reconstruction failure; the video reconstruction failure comprises: video frame damage and video and audio-picture asynchronism;
and extracting and learning the fault image characteristics by adopting a convolutional neural network according to the classified fault characteristics, and determining the video fault reason.
As an embodiment of the present invention: the method for extracting and learning the fault image features by adopting a convolutional neural network according to the classified fault features to determine the video fault reason further comprises the following steps:
when the video fault cause indicates that the course video has a deformation problem, performing edge monitoring on the course video data to acquire boundary information of the corresponding display device;
analyzing the spatial consistency of each frame of data of the course video to obtain an analysis result;
when the analysis result shows that the space consistency meets the requirement, matching a starting frame and an ending frame of the course video by adopting feature transformation to obtain a first processing video;
and linearly aligning the first processing video and the course video according to time and space to obtain a second processing video.
As an embodiment of the present invention: an intelligent processing method for online teaching video, further comprising:
acquiring pause frequency of a student end in a video playing process according to the state monitoring result of the reconstructed video course, carrying out pause point positioning on the reconstructed video based on the pause frequency, and acquiring a primary knowledge point sequence corresponding to a pause point according to the pause point positioning result;
according to the state monitoring result of the reconstructed video course, acquiring the playback frequency of the student end in the video playing process, performing playback point positioning on the reconstructed video based on the playback frequency, and acquiring a secondary knowledge point sequence corresponding to the playback points according to the playback point positioning result;
acquiring post-class comments of the reconstructed video courses, performing frequency analysis on knowledge point keywords in the post-class comments, and acquiring a three-level knowledge point sequence based on frequency analysis results of the knowledge point keywords;
acquiring primary knowledge point sequences and secondary knowledge point sequences corresponding to all student terminals based on a preset cloud platform;
performing frequency analysis on the primary knowledge point sequence, the secondary knowledge point sequence and the tertiary knowledge point sequence corresponding to all student terminals to obtain frequency analysis results of the knowledge point sequences, and performing grade division on difficulty grades of the knowledge points according to the knowledge point frequency analysis results; wherein the difficulty level of the knowledge point comprises: primary difficulty knowledge points, secondary difficulty knowledge points and tertiary difficulty knowledge points;
and dynamically setting a corresponding difficulty review plan according to the difficulty grade division result, and synchronizing the difficulty review plan to a corresponding teacher port.
As an embodiment of the present invention: an intelligent processing method for online teaching video, further comprising:
collecting lecture video data of a lecture teacher in real time through a binocular vision camera, and carrying out video frame splicing on the video data of multiple angles to obtain a panoramic lecture video;
obtaining left and right eye parallax data by depth calculation aiming at the panoramic lecture video;
acquiring panoramic video data in a three-dimensional scene through VR display equipment based on the left and right eye parallax data;
performing delay measurement according to panoramic video data in the three-dimensional scene to obtain delay time, and judging that the delay time exceeds the standard when the delay time is larger than a preset time range;
and when the delay time exceeds the standard, starting delay control for the encoder.
As an embodiment of the present invention: an intelligent processing system for online teaching video, comprising:
a course verification module: used for acquiring the student-side identity verification code, acquiring the target course code according to the verification code, and calling the corresponding target video course according to the target course code based on the preset cloud course library;
a video compression module: used for performing video frame type analysis according to the target video course, determining the video frame type of each frame, and performing lossless compression on the target video course according to the video frame type of each frame;
a state monitoring module: used for performing video reconstruction on the lossless compressed data to acquire the reconstructed video course, and performing state monitoring on the reconstructed video course to acquire the state monitoring result.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic flow chart illustrating an intelligent processing method for online teaching video according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a lossless compression flow in an intelligent processing system for online teaching video according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an intelligent processing system for online teaching video according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions, and "plurality" means two or more unless specifically limited otherwise. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Example 1:
the embodiment of the invention provides an intelligent processing method of an online teaching video, which comprises the following steps as shown in the attached figure 1:
acquiring a student identity verification code, acquiring a target course code according to the identity verification code, and calling a corresponding target video course according to the target course code based on a preset cloud course library;
performing video frame type analysis according to the target video course, determining the video frame type of each frame, and performing lossless compression processing on the target video course according to the video frame type of each frame;
the student side carries out video reconstruction according to the lossless compression data to obtain reconstructed video courses, and carries out state monitoring on the viewing process of the reconstructed video courses to obtain state monitoring results;
in one practical scenario: when online education is carried out, the teacher end sends all online education videos to the server end; when a student end needs a particular video course, it sends a course service request, and the server end analyses the received request and sends the target video course to the student end for online learning. Transmitting the data directly in this way easily causes video data transmission delay; when the video data volume is large, transmission is slow and data is easily lost, and direct transmission also easily causes image quality degradation and reduced resolution;
when the method is implemented, the target course code is first obtained directly from the login information of the student end, and the target video course is then obtained from the link clicked at the student end; during transmission, the target video course is losslessly compressed, the compressed data is sent to the student end through the server, and the student end reconstructs the video after receiving the lossless compressed data; while the student end views the reconstructed video course, the viewing state is monitored in real time and the monitored data is recorded;
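As a concrete illustration of this flow, the sketch below wires the three stages together in Python; the function and variable names, the in-memory stand-ins for the subscription database and cloud course library, and the use of zlib in place of the patent's prediction/DCT/entropy pipeline are all assumptions for illustration, not details from the patent.

```python
import zlib  # zlib stands in for the patent's prediction + DCT + entropy pipeline

def serve_course(student_id, subscriptions, course_library):
    """Map a student login to its target course code, fetch the target video
    course from the cloud course library, and losslessly compress it for transfer."""
    course_code = subscriptions[student_id]      # target course code from login info
    video_bytes = course_library[course_code]    # target video course
    return zlib.compress(video_bytes, level=9)   # lossless compression before sending

# Hypothetical usage: the student end would decompress and reconstruct the video.
blob = serve_course("s001",
                    {"s001": "MATH-101"},
                    {"MATH-101": b"\x00\x01fake-video-bytes"})
```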
the beneficial effects of the above technical scheme are: the method acquires the target video course directly from the login information of the student end, which reduces the time the system spends determining which courses different students have purchased; the target video course is uploaded to the server in losslessly compressed form and the compressed data is downloaded directly by the student end, which reduces the data footprint, shortens data transmission time and improves transmission efficiency; and by monitoring the state in which the student end views the reconstructed course, the key and difficult points students encounter in the online teaching video can be acquired rapidly, so the teacher end can address them in a targeted way.
Example 2:
in one embodiment, the obtaining the student-side authentication code, obtaining the target course code according to the authentication code, and calling the corresponding target video course according to the target course code based on a preset cloud course library includes:
acquiring student-side login information, and acquiring course subscription information of the client based on a preset client subscription database according to the login information; wherein,
the course subscription information includes: subscribed course codes, subscribed course names, course hours, course time and course teachers;
automatically generating a course selection list according to the course subscription information of the client, and acquiring a target course selection item of the client according to the course selection list;
acquiring a corresponding course selection link according to the target course selection item of the client, searching a target course based on the cloud course library according to the course selection link, and determining a target video course;
in one practical scenario: when the student end logs in with an account number and password, the relevant information of the subscribed courses can be viewed and a subscribed course selected; the server end receives the service request sent by the student end, searches for the selected course in the server and sends the retrieved video data directly to the student end, so the student can view the course;
when the method is implemented, the server side acquires the subscribed course list through the login information of the student side, wherein the list comprises the subscribed course code, subscribed course name, class hours, teaching time, teaching teacher and other information; the target video course is then determined according to the link selected at the student side;
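A minimal sketch of how the course selection list could be auto-generated from the subscription records; the record fields follow the list above (course code, name, class hours, teaching time, teacher), but the data structures and the link format are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Subscription:
    course_code: str
    course_name: str
    class_hours: int
    course_time: str
    teacher: str

def build_selection_list(subs):
    """Auto-generate the course selection list shown at the student end; each
    item carries a course selection link used to look up the target course."""
    return [{"label": f"{s.course_name} ({s.teacher}, {s.course_time})",
             "link": f"/courses/{s.course_code}"}   # hypothetical link format
            for s in subs]

menu = build_selection_list(
    [Subscription("MATH-101", "Calculus", 32, "Mon 9:00", "Ms. Zhang")])
```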
the beneficial effects of the above technical scheme are: according to the invention, the course subscription information of all students is stored in the cloud database, so that the storage efficiency of the server is favorably improved, the data is not easy to lose, and the integrity of the data is ensured.
Example 3:
in one embodiment, the performing video frame type analysis according to the target video lesson, determining a video frame type of each frame, and performing lossless compression processing on the target video lesson according to the video frame type of each frame, as shown in fig. 2, includes:
performing video segmentation according to the target video course to obtain a plurality of frames of image data, and performing down-sampling processing on each frame of image data to obtain primary image data;
acquiring reconstructed boundary pixels according to the primary image data, performing intra-frame prediction according to the reconstructed boundary pixels, determining spatial redundancy information in the primary image data, performing inter-frame prediction based on the spatial redundancy information, determining temporal redundancy information in the primary image data, and acquiring secondary image data according to the spatial redundancy information and the temporal redundancy information;
performing discrete cosine transform on the secondary image data, determining a signal concentration area corresponding to the secondary image data, and performing secondary discrete cosine transform according to the signal concentration area to determine primary compressed image data;
performing quantization processing on the primary compressed image data, determining unnecessary data, discarding the unnecessary data, and determining secondary compressed data; wherein the non-essential data comprises: repeating data and floating point data;
performing data processing on the second-level compressed data through entropy coding to obtain third-level compressed data;
in one practical scenario: according to the service request sent by the student end, the server end receives the corresponding service request, searches for the target course video required by the student end in the server, and directly sends the complete target course data to the student local end when searching for the corresponding target video data;
when the method is implemented, the teacher end losslessly compresses the course data and uploads it to the server end; the compression steps comprise: first, the complete target course video is divided into multiple frames of image data by video segmentation, and each frame of image data is down-sampled; intra-frame or inter-frame prediction is then performed on the down-sampled data to select the prediction mode that saves the most bitstream; discrete cosine transform is then applied to the predicted video data to determine the frame type of the current frame; the video data is then quantized, and during quantization the primary compressed image data is first analysed to obtain image data parameters, model analysis is performed on these parameters to determine the quantization noise in the image data, and the quantization noise is removed; finally, lossless compression is completed through entropy coding;
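The toy per-frame sketch below walks the same stages (down-sampling, DCT, quantization, entropy coding) on a single frame; note that the quantization step as written discards information, so a strictly lossless pipeline as claimed would have to keep the prediction residuals exact. All parameter values are assumptions.

```python
import numpy as np
from scipy.fft import dctn
import zlib

def compress_frame(frame, q=16):
    small = frame[::2, ::2].astype(np.float32)         # down-sampling -> primary image data
    coeffs = dctn(small, norm="ortho")                 # DCT concentrates the signal energy
    quantized = np.round(coeffs / q).astype(np.int16)  # quantization drops non-essential data
    return zlib.compress(quantized.tobytes())          # entropy-coding stage (zlib stand-in)

frame = (np.random.rand(64, 64) * 255).astype(np.uint8)  # one frame from video segmentation
blob = compress_frame(frame)
```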
the beneficial effects of the above technical scheme are: the video data is subjected to video segmentation processing, so that the method is favorable for rapidly processing image data of one frame, and the data is subjected to intra-frame and inter-frame prediction, so that a mode of saving the most code stream is obtained, the efficiency of data compression is improved, the compression time is reduced, quantization noise is removed by performing quantization processing on the data, the characteristics of the video data are favorably maintained, the compression loss is reduced, and the data resolution is ensured to be maintained when the data is compressed.
Example 4:
in one embodiment, the student side performing video reconstruction according to the lossless compressed data to acquire the reconstructed video course, and performing state monitoring on the viewing process of the reconstructed video course to acquire a state monitoring result, includes:
performing video segmentation on the lossless compressed data to obtain each frame of compressed image data;
acquiring adjacent frame compressed image data according to each frame of compressed image data, and performing motion tracking according to pixel points in the adjacent frame compressed image data to acquire corresponding pixel point motion tracks;
establishing a model aiming at the motion trail of the pixel points corresponding to the compressed image data to generate a video reconstruction model;
performing motion compensation calculation according to the video reconstruction model to obtain a motion compensation error, correcting the motion compensation error, and performing update processing on the corrected video reconstruction model, wherein the update processing is to perform correction update on the video reconstruction model;
according to the updated video reconstruction model, carrying out reconstruction processing on the lossless compression data to obtain a reconstructed video course;
monitoring the checking process of the student side in real time based on the reconstructed video course, and recording monitoring data;
in one practical scenario: when video reconstruction is performed on compressed data, the prior art generally decompresses the compressed data directly; obtaining the data by direct decompression is time-consuming, and the data is easily lost or incomplete;
when the method is implemented, the compressed data is not decompressed directly for video reconstruction; instead, the pixel points in each frame of compressed image data are acquired and tracked in frame order, and a video reconstruction model is constructed from the tracked pixel trajectories; because a motion compensation error exists in the video reconstruction model, reconstructing the video directly from the model would easily damage the data, so the error is calculated and corrected first;
in a specific embodiment, in the process of constructing the video reconstruction model, in order to improve the video resolution during reconstruction, the frame compressed image data is pre-trained, weights are calculated, and the parameters of the video reconstruction model are updated with the weight coefficients so that the model becomes more accurate:
step 1: suppose the image input is four adjacent data frames, corresponding to f1, f2, f3 and f4. When pre-training is performed on the input data frames, the pre-training calculation proceeds as follows:

[equation rendered as image BDA0003507476350000141 in the source: the pre-training calculation I(f1, f2, f3, f4)]

wherein I(f1, f2, f3, f4) represents the result of the pre-training calculation over the adjacent frames, γ(p, q, u, c) represents the weight of the filter with (p, q, u, c) the weight of each input frame in the filter, l(d) is the error value generated during the pre-training process with d the number of layers, and Σt denotes the input data frame corresponding to time t; P and Q respectively denote the distribution range of the weight values, the weight of the input data frame is P × Q × 1 × C, and since the dimension of the input data of the third frame is 1, its weight is 1;
step 2: according to the result of the pre-training, the motion compensation of two adjacent frames is calculated so as to improve the resolution of the reconstructed video; the motion compensation formula is:

μ1(f1, f2) = (1 - s(f1, f2)) · m(f1, f2) + s(f1, f2) · μ0(f1, f2)

wherein s(f1, f2) represents the degree of association between the pixel points corresponding to adjacent frames f1 and f2, m(f1, f2) represents the center pixel corresponding to the adjacent frames, μ0(f1, f2) represents the motion-compensated adjacent frames, and μ1(f1, f2) represents the video frame data of the adjacent frames after motion compensation;
step 3: s(f1, f2) is calculated as follows:

[equation rendered as image BDA0003507476350000142 in the source: the calculation of s(f1, f2) from λ and ξ(f1, f2)]

wherein λ represents a constant parameter, ξ(f1, f2) represents the mismatch error in the motion compensation of the adjacent frame data, and s(f1, f2) represents the degree of association between the pixel points corresponding to adjacent frames f1 and f2;
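The step-2 blend transcribes directly into code; since the step-3 formula for s(f1, f2) survives only as an image, the mapping from mismatch error to association degree below is an assumed monotone stand-in, not the patent's formula.

```python
import numpy as np

def motion_compensate(m, mu0, s):
    """Step 2: blend the center-pixel estimate m with the motion-compensated
    frame mu0, weighted per pixel by the association degree s in [0, 1]."""
    return (1.0 - s) * m + s * mu0

def association(xi, lam=1.0):
    """Assumed stand-in for step 3: a high mismatch error xi between adjacent
    frames yields a low association degree; lam plays the constant parameter λ."""
    return np.exp(-xi / lam)
```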
the beneficial effects of the above technical scheme are: tracking the motion trajectories of the pixel points in the image data in frame order helps to capture the upcoming trend of the video data intuitively and quickly and improves the accuracy of video reconstruction; building the video reconstruction model from the motion trajectories avoids repeated work when compressed data is reconstructed later, since reconstruction is performed directly from the model; in addition, calculating and correcting the motion compensation error continuously optimizes the video reconstruction model and ensures the reliability of the video reconstruction.
Example 5:
in one embodiment, the student side performs video reconstruction according to the lossless compressed data to obtain a reconstructed video course, and performs state monitoring on the viewing process of the reconstructed video course to obtain a state monitoring result, further comprising the following steps:
performing noise analysis according to the lossless compression data, acquiring quantization noise data corresponding to the lossless compression data, distinguishing the quantization noise data, and confirming a distinguishing result; wherein the quantization noise can be divided into: transform domain quantization noise, spatial domain quantization noise;
when the distinguishing result of the quantization noise is space domain quantization noise, converting the space domain quantization noise to form transform domain quantization noise;
carrying out noise separation on the transform domain quantization noise according to a preset frequency level, and determining a frequency noise separation result; wherein the preset frequency level comprises: high frequency, medium frequency, low frequency;
based on the noise separation result, converting the transform domain quantization noise into original space domain quantization noise through inverse conversion for denoising;
in one practical scenario: when video reconstruction is performed on compressed data, the prior art generally decompresses the compressed data directly; obtaining the data by direct decompression is time-consuming, and the data is easily lost or incomplete;
When the invention is implemented, quantization noise is easily generated in the process of reconstructing video from the lossless compressed data; if the quantization noise is not handled, the resolution of the reconstructed video data is reduced and data is lost. Therefore, in the invention, the quantization noise is first acquired and then classified into transform domain quantization noise and spatial domain quantization noise, and this noise classification makes the subsequent denoising of the quantization noise possible;
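A sketch of that denoising route, assuming a DCT as the spatial-to-transform-domain conversion and illustrative band edges and gains; the patent fixes neither the transform nor these values.

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoise_by_bands(noisy, mid_gain=0.7, high_gain=0.3):
    c = dctn(noisy.astype(np.float32), norm="ortho")   # spatial -> transform domain
    h, w = c.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    r = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)         # normalized frequency radius
    c[(r >= 0.25) & (r < 0.6)] *= mid_gain             # attenuate the medium-frequency band
    c[r >= 0.6] *= high_gain                           # attenuate the high-frequency band
    return idctn(c, norm="ortho")                      # inverse conversion -> spatial domain
```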
the beneficial effects of the above technical scheme are: according to the invention, the denoising processing is carried out on the quantization noise in the video reconstruction process, so that the efficiency and the accuracy of video reconstruction are favorably improved, and the noise separation is carried out on the quantization noise in a transform domain according to high, medium and low frequencies, so that the denoising efficiency is favorably improved, the image quality of the video is prevented from being damaged in the reconstruction process, and the front and back integrity of the video data is ensured.
Example 6:
in one embodiment, the student side performs video reconstruction according to the lossless compressed data to obtain a reconstructed video course, and performs state monitoring on the viewing process of the reconstructed video course to obtain a state monitoring result, further comprising the following steps:
acquiring state monitoring data, and sending the state monitoring data to a preset big data processing platform to acquire abnormal monitoring data;
extracting fault features aiming at the abnormal monitoring data to obtain fault image features, classifying according to the fault image features to obtain classified fault features; wherein the fault comprises: network connection failure and video reconstruction failure; the video reconstruction failure comprises: video frame damage and video and audio-picture asynchronism;
extracting and learning the fault image characteristics by adopting a convolutional neural network according to the classified fault characteristics to determine the video fault reason;
in one practical scenario: when state monitoring is performed while the student views the teaching video, the problems generally include server errors and local network delay, so the specific cause of a video fault cannot be determined when other fault problems occur;
when the method is implemented, the checking process of the student end is monitored in real time, the monitored data is recorded, the monitored data is uploaded to a big data processing platform, abnormal monitoring data is obtained, feature analysis is carried out according to the abnormal monitoring data, the feature of a fault is obtained, and finally the corresponding fault reason is determined in the big data processing platform according to the feature;
in a specific embodiment, the fault image features are obtained and need to be classified according to the fault image features, wherein the fault classification process includes:
step 1: when the convolutional neural network is adopted to classify faults, an activation function first needs to be selected:

ReLU(a) = max(0, a)

wherein a represents an input item; when selecting the activation function, the ReLU function is chosen considering that it is 0 for the negative part and identical to the input for the rest, which gives better efficiency in feature extraction;
step 2: the fault classification function corresponds to:

[equation rendered as image BDA0003507476350000172 in the source: the classification function J combining the cost function m() and the gradient function n()]

wherein m() represents a cost function, n() represents a pixel point gradient function, a represents an input item, and J represents the classification of video faults by combining the cost function and the gradient function.
The beneficial effects of the above technical scheme are: by monitoring the teaching video viewing state of the student end in real time, abnormal data can be obtained promptly during the viewing process, and determining the fault characteristics and causes through the big data processing platform and machine learning improves the reliability of the fault analysis result.
Example 7:
in one embodiment, the extracting and learning the fault image features by using a convolutional neural network according to the classified fault features to determine a video fault cause further includes:
when the video fault cause indicates that the course video has a deformation problem, performing edge monitoring on the course video data to acquire boundary information of the corresponding display device;
analyzing the spatial consistency of each frame of data of the course video to obtain an analysis result;
when the analysis result shows that the space consistency meets the requirement, matching a starting frame and an ending frame of the course video by adopting feature transformation to obtain a first processing video;
linearly aligning the first processing video and the course video according to time and space to obtain a second processing video;
in one practical scenario: when state monitoring is performed while the student end views the teaching video, the problems generally include server errors and local network delay, so the specific cause of a video fault cannot be determined when other fault problems occur;
When the method is implemented, a big data processing platform is introduced into the viewing process of the online teaching video, and fault characteristic analysis is performed on the abnormal data in combination with machine learning to finally determine the fault cause; the characteristic analysis process mainly comprises spatial consistency analysis and linear alignment of the played video in time and space, and the corresponding characteristic analysis results are obtained through these two processing methods;
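One way the start/end-frame matching by feature transformation could look, using ORB features and a RANSAC homography from OpenCV; the patent names neither the detector nor the transform model, so both are assumptions.

```python
import cv2
import numpy as np

def align_frame(frame, reference):
    """Match a (grayscale) course-video frame against a reference frame and
    warp it so the two are spatially aligned."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(frame, None)
    k2, d2 = orb.detectAndCompute(reference, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)  # spatial alignment transform
    return cv2.warpPerspective(frame, H, reference.shape[1::-1])
```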
the beneficial effects of the above technical scheme are: the invention is beneficial to removing redundancy generated on a time domain through the analysis of space consistency, and is beneficial to reducing the complexity of an algorithm through linearly aligning the played video from time and space, thereby improving the efficiency and the accuracy of fault analysis.
Example 8:
in one embodiment, a method for intelligent processing of online teaching video, further comprises:
acquiring pause frequency of a student end in a video playing process according to the state monitoring result of the reconstructed video course, carrying out pause point positioning on the reconstructed video based on the pause frequency, and acquiring a primary knowledge point sequence corresponding to a pause point according to the pause point positioning result;
according to the state monitoring result of the reconstructed video course, acquiring the playback frequency of the student end in the video playing process, performing playback point positioning on the reconstructed video based on the playback frequency, and acquiring a secondary knowledge point sequence corresponding to the playback points according to the playback point positioning result;
acquiring post-class comments of the reconstructed video courses, performing frequency analysis on knowledge point keywords in the post-class comments, and acquiring a three-level knowledge point sequence based on frequency analysis results of the knowledge point keywords;
acquiring primary knowledge point sequences and secondary knowledge point sequences corresponding to all student terminals based on a preset cloud platform;
performing frequency analysis on the primary knowledge point sequence, the secondary knowledge point sequence and the tertiary knowledge point sequence corresponding to all student terminals to obtain frequency analysis results of the knowledge point sequences, and performing grade division on difficulty grades of the knowledge points according to the frequency analysis results of the knowledge points; wherein the difficulty level of the knowledge point comprises: the first-level difficulty knowledge points, the second-level difficulty knowledge points and the third-level difficulty knowledge points;
dynamically setting a corresponding difficulty review plan according to the difficulty grade division result, and synchronizing the difficulty review plan to a corresponding teacher port;
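A minimal sketch of the three-tier frequency analysis and difficulty grading described above follows; the tier thresholds, the relative-frequency measure, and the function name are assumptions made for illustration:

```python
from collections import Counter

def grade_knowledge_points(primary, secondary, tertiary, hi=0.6, mid=0.3):
    """primary/secondary/tertiary: lists of knowledge-point names
    aggregated from all student terminals and post-class comments."""
    freq = Counter(primary) + Counter(secondary) + Counter(tertiary)
    top = max(freq.values())
    tiers = {}
    for point, count in freq.items():
        ratio = count / top  # frequency relative to the most-flagged point
        if ratio >= hi:
            tiers[point] = "first-level difficulty"
        elif ratio >= mid:
            tiers[point] = "second-level difficulty"
        else:
            tiers[point] = "third-level difficulty"
    return tiers
```

A review plan could then be ordered by tier before being synchronized to the corresponding teacher port.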
in one practical scenario: the teacher end obtains the distribution of difficulty knowledge points only by checking the course comments below the video port; this approach is passive and incomplete and relies excessively on the teacher end, so the interactivity of online education is weak;
when the method is implemented, when the student end manually pauses and plays back the course video, the server end analyzes the pause and playback frequencies to obtain a corresponding knowledge point list, performs summary analysis through the big data processing platform in combination with the comment contents below the video to obtain the corresponding difficulty knowledge points, and synchronizes them to the teacher port, where the corresponding knowledge points can be summarized and reviewed;
the beneficial effects of the above technical scheme are: collecting the playback frequency and pause frequency helps improve the classification and summary of difficulty knowledge points, acquiring the corresponding difficulty knowledge points through the big data processing platform helps obtain the distribution of difficulty knowledge points across all trainees, and the teacher end can arrange reviews of the difficulty knowledge points in a more targeted manner.
Example 9:
in one embodiment, a method for intelligently processing an online teaching video further comprises:
collecting lecture video data of a lecture teacher in real time through a binocular vision camera, and carrying out video frame splicing on the video data of multiple angles to obtain a panoramic lecture video;
obtaining left-eye and right-eye parallax data through depth calculation on the panoramic lecture video;
acquiring panoramic video data in a three-dimensional scene through VR display equipment based on the left-eye and right-eye parallax data;
performing delay measurement according to panoramic video data in the three-dimensional scene to obtain delay time, and judging that the delay time exceeds the standard when the delay time is larger than a preset time range;
when the delay time exceeds the standard, starting delay control for an encoder;
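As a rough illustration of the left/right-eye parallax step, the sketch below computes a disparity map with OpenCV's semi-global block matcher; the matcher parameters are illustrative, not values taken from the patent:

```python
import cv2

def disparity_from_stereo(left_gray, right_gray):
    """Disparity between rectified left/right views from the binocular
    vision camera; SGBM returns fixed-point values scaled by 16."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,  # must be a multiple of 16
        blockSize=5,
    )
    return matcher.compute(left_gray, right_gray).astype("float32") / 16.0
```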
in one practical scenario: during online learning, a target video course is generally viewed only through a smart device end, so the viewing mode is single and lacks interactivity;
in the implementation of the invention, when the teacher end gives a lesson, a binocular vision camera is adopted to collect the lecture video in all directions, and depth calculation is then performed on the collected data to obtain the left-eye and right-eye parallax data;
the beneficial effects of the above technical scheme are: setting a VR video viewing mode increases the interactivity and interest of the online learning process and improves the lesson experience; in addition, delay easily occurs when lessons are given in VR mode, so when the delay exceeds the standard, delay control is started for the encoder, reducing the adverse effects of the delay and improving lesson efficiency.
Example 10:
in one embodiment, an intelligent processing system for online teaching video, as shown in fig. 3, comprises:
course verification module: used for acquiring a student end identity verification code, acquiring a target course code according to the identity verification code, and calling a corresponding target video course according to the target course code based on a preset cloud course library;
a video compression module: used for performing video frame type analysis according to the target video course, determining the video frame type of each frame, and performing lossless compression on the target video course according to the video frame type of each frame;
a state detection module: used for performing video reconstruction on the lossless compression data to obtain a reconstructed video course, and performing state monitoring on the reconstructed video course to obtain a state detection result;
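The sketch below shows one way the three modules could be wired together; the class and method names are illustrative assumptions, and zlib merely stands in for the lossless codec and reconstruction steps:

```python
import zlib

class CourseVerificationModule:
    def __init__(self, code_map, cloud_library):
        self.code_map = code_map            # verification code -> course code
        self.cloud_library = cloud_library  # course code -> video bytes

    def fetch_course(self, verification_code):
        return self.cloud_library[self.code_map[verification_code]]

class VideoCompressionModule:
    def compress(self, video_bytes):
        return zlib.compress(video_bytes)   # stand-in for the lossless codec

class StateDetectionModule:
    def monitor(self, compressed):
        video = zlib.decompress(compressed)  # stand-in for video reconstruction
        return {"bytes": len(video), "status": "ok"}
```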
in one practical scenario: during online education, the teacher end sends all online teaching videos to the server end; when the student end needs to find a corresponding video course, it sends a course service request, and the server end analyzes the received service request and sends the target video course to the student end for online learning; transmitting data directly in this way easily causes video transmission delay, and when the video data volume is large, transmission is slow and data loss easily occurs, while directly transmitting the data also easily damages image quality and reduces resolution;
when the method is implemented, the target course code is first obtained directly according to the login information of the student end, and the target video course is then obtained according to the click link of the student end; during transmission of the target video course, the video course is first losslessly compressed, the losslessly compressed data is sent to the student end through the server, and the student end performs video reconstruction after receiving it; when the student end views the reconstructed video course, the viewing state is monitored in real time and the monitored data is recorded;
the beneficial effects of the above technical scheme are: by arranging the course verification module, the target video course is obtained according to the login information of the student end, the teaching video can be obtained directly through this login information, and the time the system spends judging which courses different students have purchased is reduced; the video compression module uploads the target video course to the server in lossless compressed form and the student end directly downloads the losslessly compressed data, so the memory occupied by the data is reduced, the transmission time is shortened, and the transmission efficiency is improved; the state detection module monitors the state of the student end viewing the reconstructed video course, so the key and difficult points of the students in the online teaching video can be obtained rapidly, and the teacher end can review those key and difficult points more efficiently.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. An intelligent processing method for online teaching videos is characterized by comprising the following steps:
acquiring a student identity verification code, acquiring a target course code according to the identity verification code, and calling a corresponding target video course according to the target course code based on a preset cloud course library;
performing video frame type analysis according to the target video course, determining the video frame type of each frame, and performing lossless compression processing on the target video course according to the video frame type of each frame;
the student side carries out video reconstruction according to the lossless compression data to obtain reconstructed video courses, and carries out state monitoring on the viewing process of the reconstructed video courses to obtain state monitoring results;
the method further comprises the following steps:
acquiring pause frequency in the process of playing videos by a student end according to the state monitoring result of the reconstructed video course, positioning pause points for the reconstructed videos based on the pause frequency, and acquiring a primary knowledge point sequence corresponding to the pause points according to the positioning result of the pause points;
acquiring playback frequency of a student end in a video playing process according to the state monitoring result of the reconstructed video course, carrying out playback point positioning on the reconstructed video based on the playback frequency, and acquiring a secondary knowledge point sequence corresponding to the playback point according to the playback point positioning result;
acquiring post-class comments of the reconstructed video courses, performing frequency analysis on keywords of knowledge points in the post-class comments, and acquiring a three-level knowledge point sequence based on frequency analysis results of the keywords of the knowledge points;
acquiring primary knowledge point sequences and secondary knowledge point sequences corresponding to all student terminals based on a preset cloud platform;
performing frequency analysis on the primary knowledge point sequence, the secondary knowledge point sequence and the tertiary knowledge point sequence corresponding to all student terminals to obtain frequency analysis results of the knowledge point sequences, and performing grade division on difficulty grades of the knowledge points according to the knowledge point frequency analysis results; wherein the difficulty level of the knowledge point comprises: the first-level difficulty knowledge points, the second-level difficulty knowledge points and the third-level difficulty knowledge points;
and dynamically setting a corresponding difficulty review plan according to the difficulty grade division result, and synchronizing the difficulty review plan to a corresponding teacher port.
2. The method as claimed in claim 1, wherein the obtaining of the student authentication code, obtaining of the target lesson code according to the authentication code, and calling the corresponding target video lesson according to the target lesson code based on a preset cloud lesson library comprises:
acquiring student side login information, and acquiring course subscription information of a client based on a preset client subscription database according to the login information; wherein the course subscription information includes: subscribed course codes, subscribed course names, course hours, course time and course teachers;
automatically generating a course selection list according to the course subscription information of the client, and acquiring a target course selection item of the client according to the course selection list;
and acquiring a corresponding course selection link according to the target course selection item of the client, searching a target course based on the cloud course library according to the course selection link, and determining a target video course.
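This lookup chain (login information, subscription records, selection list, course link) reduces to a few table lookups; the field names below are illustrative stand-ins for the subscription-database schema, not details from the claim:

```python
def build_selection_list(subscriptions):
    """Turn the client's subscription records into a course selection list."""
    return [{"code": s["course_code"], "name": s["course_name"],
             "hours": s["hours"], "time": s["time"], "teacher": s["teacher"]}
            for s in subscriptions]

def resolve_target_course(selection, link_table, cloud_library):
    """Follow the course selection link into the cloud course library."""
    return cloud_library[link_table[selection["code"]]]
```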
3. The method as claimed in claim 1, wherein said analyzing the video frame type according to the target video lesson, determining the video frame type of each frame, and performing lossless compression processing for the target video lesson according to the video frame type of each frame comprises:
performing video segmentation according to the target video course to obtain a plurality of frames of image data, and performing down-sampling processing on each frame of image data to obtain primary image data;
acquiring reconstructed boundary pixels according to the primary image data, performing intra-frame prediction according to the reconstructed boundary pixels, determining spatial redundancy information in the primary image data, performing inter-frame prediction based on the spatial redundancy information, determining temporal redundancy information in the primary image data, and acquiring secondary image data according to the spatial redundancy information and the temporal redundancy information;
performing discrete cosine transform on the secondary image data, determining a signal concentration area corresponding to the secondary image data, and performing secondary discrete cosine transform according to the signal concentration area to determine primary compressed image data;
performing quantization processing on the primary compressed image data, determining non-essential data, discarding the non-essential data, and determining secondary compressed data; wherein the non-essential data comprises: repeated data and floating point data;
and carrying out data processing on the second-level compressed data through entropy coding to obtain third-level compressed data.
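The transform, quantization, and entropy-coding stages of this claim can be sketched as below; the 8x8 block DCT, the quantization step size, and the use of zlib as a stand-in entropy coder are all illustrative assumptions:

```python
import zlib
import numpy as np
import cv2

def compress_frame(gray, q=16):
    """Blockwise DCT -> quantization (discarding non-essential
    precision) -> entropy-coding stand-in."""
    h, w = (d - d % 8 for d in gray.shape)  # crop to whole 8x8 blocks
    blocks = gray[:h, :w].astype(np.float32)
    coeffs = np.empty_like(blocks)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            coeffs[y:y+8, x:x+8] = cv2.dct(blocks[y:y+8, x:x+8])
    quantized = np.round(coeffs / q).astype(np.int16)
    return zlib.compress(quantized.tobytes())
```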
4. The intelligent processing method for online teaching video as claimed in claim 1, wherein the student side performs video reconstruction according to lossless compression data, acquires reconstructed video lessons, and performs status monitoring on the viewing process of the reconstructed video lessons, and acquires status monitoring results, including:
performing video segmentation on the lossless compressed data to obtain each frame of compressed image data;
acquiring adjacent frame compressed image data according to each frame of compressed image data, and performing motion tracking according to pixel points in the adjacent frame compressed image data to acquire corresponding pixel point motion tracks;
establishing a model aiming at the motion trail of the pixel points corresponding to the compressed image data to generate a video reconstruction model;
performing motion compensation calculation according to the video reconstruction model to obtain a motion compensation error, correcting the motion compensation error, and performing update processing on the video reconstruction model, wherein the update processing is a correction update of the video reconstruction model;
according to the updated video reconstruction model, carrying out reconstruction processing on the lossless compression data to obtain a reconstructed video course;
and based on the reconstructed video course, monitoring the viewing process of the student end in real time, and recording the monitoring data.
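A minimal sketch of the motion tracking and compensation-error steps follows, using Farneback dense optical flow as the pixel motion model; the flow parameters and function name are illustrative assumptions:

```python
import cv2
import numpy as np

def motion_compensation_error(prev_gray, next_gray):
    """Track pixel motion between adjacent frames, warp the next frame
    back along the flow, and measure the compensation error."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # prev(y, x) ~ next(y + flow_y, x + flow_x), so sampling next at the
    # flow-shifted grid reconstructs the previous frame
    predicted_prev = cv2.remap(next_gray, map_x, map_y, cv2.INTER_LINEAR)
    return np.abs(prev_gray.astype(np.float32)
                  - predicted_prev.astype(np.float32))
```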
5. The intelligent processing method for online teaching videos as claimed in claim 1, wherein the student side performs video reconstruction according to the lossless compression data, acquires a reconstructed video course, performs status monitoring for the viewing process of the reconstructed video course, and acquires a status monitoring result, further comprising:
performing noise analysis according to the lossless compression data, acquiring quantization noise data corresponding to the lossless compression data, distinguishing the quantization noise data, and confirming a distinguishing result; wherein the quantization noise can be divided into: transform domain quantization noise, spatial domain quantization noise;
when the distinguishing result of the quantization noise is spatial domain quantization noise, converting the spatial domain quantization noise to form transform domain quantization noise;
carrying out noise separation on the transform domain quantization noise according to a preset frequency level, and determining a frequency noise separation result; wherein the preset frequency level comprises: high frequency, medium frequency, low frequency;
and based on the noise separation result, converting the transform domain quantization noise back into the original spatial domain quantization noise through inverse transformation for denoising.
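A minimal sketch of the frequency-level noise separation, using a 2-D FFT as the forward/inverse transform and illustrative radius thresholds for the high, medium, and low frequency bands:

```python
import numpy as np

def split_noise_bands(noise, low=0.1, high=0.4):
    """Move spatial-domain quantization noise into the transform
    domain, split it by normalized frequency radius, and invert each
    band back to the spatial domain."""
    spectrum = np.fft.fftshift(np.fft.fft2(noise))
    h, w = noise.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.hypot(yy / (h / 2), xx / (w / 2))
    bands = {}
    for name, mask in (("low", radius < low),
                       ("mid", (radius >= low) & (radius < high)),
                       ("high", radius >= high)):
        bands[name] = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
    return bands
```

Each band can then be attenuated independently before the inverse transform, which is one common way to denoise per frequency level.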
6. The method as claimed in claim 1, wherein the student end performs video reconstruction according to the lossless compression data to obtain reconstructed video lessons, performs status monitoring on the viewing process of the reconstructed video lessons to obtain status monitoring results, and further comprising:
acquiring state monitoring data, and sending the state monitoring data to a preset big data processing platform to acquire abnormal monitoring data;
extracting fault characteristics from the abnormal monitoring data to obtain fault image characteristics, and classifying according to the fault image characteristics to obtain classified fault characteristics; wherein the faults comprise: network connection failure and video reconstruction failure; the video reconstruction failure comprises: video frame damage and audio-video desynchronization;
and extracting and learning the fault image characteristics by adopting a convolutional neural network according to the classified fault characteristics, and determining the video fault reason.
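One plausible shape for the convolutional network in this claim is sketched below in PyTorch; the layer sizes, the 64x64 input, and the four fault classes are assumptions, not details from the patent:

```python
import torch
import torch.nn as nn

class FaultCNN(nn.Module):
    """Classify fault-image features into fault causes, e.g. network
    connection failure, video frame damage, audio-video desync, other."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):  # x: (batch, 1, 64, 64) fault-feature maps
        return self.classifier(self.features(x).flatten(1))
```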
7. The intelligent processing method for online teaching video as claimed in claim 6, wherein the extracting and learning of the fault image features by using a convolutional neural network according to the classified fault features to determine the video fault cause further comprises:
when the video fault cause shows that the course video has a deformation problem, carrying out edge monitoring on the course video data to obtain boundary information of the corresponding display equipment;
analyzing the spatial consistency of each frame of data of the course video to obtain an analysis result;
when the analysis result shows that the spatial consistency meets the requirement, matching the starting frame and ending frame of the course video by feature transformation to obtain a first processed video;
and linearly aligning the first processed video with the course video in time and space to obtain a second processed video.
8. The intelligent processing method for online teaching video according to claim 1, further comprising:
collecting lecture video data of a lecture teacher in real time through a binocular vision camera, and carrying out video frame splicing on the video data of multiple angles to obtain a panoramic lecture video;
obtaining left-eye and right-eye parallax data through depth calculation on the panoramic lecture video;
acquiring panoramic video data in a three-dimensional scene through VR display equipment based on the left-eye and right-eye parallax data;
performing delay measurement according to panoramic video data in the three-dimensional scene to obtain delay time, and judging that the delay time exceeds the standard when the delay time is larger than a preset time range;
and when the delay time exceeds the standard, starting delay control for the encoder.
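The delay measurement and threshold test in this claim reduce to a timestamp comparison; the 150 ms limit and timestamping scheme below are illustrative stand-ins for the preset time range:

```python
import time

DELAY_LIMIT_S = 0.150  # illustrative stand-in for the preset time range

def delay_exceeds_standard(capture_ts):
    """capture_ts: wall-clock time stamped when the panoramic frame
    entered the pipeline; compared against display time at the VR end.
    Returns (exceeds_standard, measured_delay)."""
    delay = time.time() - capture_ts
    return delay > DELAY_LIMIT_S, delay
```

When the returned flag is true, the system would start delay control for the encoder, for instance by lowering the target bitrate or frame size.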
9. An intelligent processing system for online teaching video, comprising:
course verification module: used for acquiring a student end identity verification code, acquiring a target course code according to the identity verification code, and calling a corresponding target video course according to the target course code based on a preset cloud course library;
a video compression module: used for performing video frame type analysis according to the target video course, determining the video frame type of each frame, and performing lossless compression processing on the target video course according to the video frame type of each frame;
a state detection module: used for performing video reconstruction on the lossless compression data to obtain a reconstructed video course, performing state monitoring on the reconstructed video course, and acquiring a state monitoring result;
the system further comprises:
a knowledge point processing module, configured to:
acquiring pause frequency in the process of playing videos by a student end according to the state monitoring result of the reconstructed video course, positioning pause points for the reconstructed videos based on the pause frequency, and acquiring a primary knowledge point sequence corresponding to the pause points according to the positioning result of the pause points;
according to the state monitoring result of the reconstructed video course, acquiring the playback frequency of the student end in the video playing process, carrying out playback point positioning on the reconstructed video based on the playback frequency, and acquiring a secondary knowledge point sequence corresponding to the playback points according to the playback point positioning result;
acquiring post-class comments of the reconstructed video course, performing frequency analysis on knowledge point keywords in the post-class comments, and acquiring a tertiary knowledge point sequence based on the frequency analysis results of the knowledge point keywords;
acquiring primary knowledge point sequences and secondary knowledge point sequences corresponding to all student terminals based on a preset cloud platform;
performing frequency analysis on the primary knowledge point sequence, the secondary knowledge point sequence and the tertiary knowledge point sequence corresponding to all student terminals to obtain frequency analysis results of the knowledge point sequences, and performing grade division on difficulty grades of the knowledge points according to the frequency analysis results of the knowledge points; wherein the difficulty level of the knowledge point comprises: the first-level difficulty knowledge points, the second-level difficulty knowledge points and the third-level difficulty knowledge points;
and dynamically setting a corresponding difficulty review plan according to the difficulty grade division result, and synchronizing the difficulty review plan to a corresponding teacher port.
CN202210142089.XA 2022-02-16 2022-02-16 Intelligent processing method and system for online teaching video Active CN114531604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210142089.XA CN114531604B (en) 2022-02-16 2022-02-16 Intelligent processing method and system for online teaching video

Publications (2)

Publication Number Publication Date
CN114531604A (en) 2022-05-24
CN114531604B (en) 2023-02-28

Family

ID=81622391


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant