WO2024087208A1 - Video playback method and system, and storage medium - Google Patents

Video playback method and system, and storage medium

Info

Publication number
WO2024087208A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
information
video data
video frame
server
Prior art date
Application number
PCT/CN2022/128383
Other languages
English (en)
Chinese (zh)
Inventor
谭红平
刘文泽
吴杰
Original Assignee
深圳市锐明技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市锐明技术股份有限公司 filed Critical 深圳市锐明技术股份有限公司
Priority to CN202280063551.5A priority Critical patent/CN118120238A/zh
Priority to PCT/CN2022/128383 priority patent/WO2024087208A1/fr
Publication of WO2024087208A1 publication Critical patent/WO2024087208A1/fr

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/2347 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving video stream encryption

Definitions

  • the present application relates to the field of video, and in particular to a video playback method, system and storage medium.
  • in the existing solution, the server encrypts the audio and video data, and the client decrypts and plays the audio and video data (if the client is a browser, a plug-in such as an ActiveX plug-in also needs to be installed).
  • the main process is as follows:
  • the user starts the client for playing the video
  • the client queries the server for decryption information of the played video
  • the server verifies the legitimacy of the client's access and returns the decryption information to the client after the verification passes;
  • the client pulls the video stream from the server, decrypts the encrypted audio and video data through the decryption information, and then decodes and plays the audio and video data.
  • the embodiments of the present application provide a video playback method, system and storage medium to solve the problem in the prior art that, when playing a video, the client needs to query the server for decryption information, which causes great access pressure on the server, affects the concurrency of the platform, delays the video display time, and degrades the user experience for low-latency video playback.
  • a first aspect of an embodiment of the present application provides a video playback method, which is applied to a server, and the method includes:
  • the expanded video frame and the encrypted audio frame are encapsulated into second video data, and the second video data is sent to the browser.
  • the second video data is used by the browser to decrypt and play the second video data according to the first decryption information in the expanded video frame structure.
  • before sending the signaling for opening the video to the device end for capturing the video, the method further includes:
  • the device end is authenticated according to the registration information, and the device end is set to the online state after the authentication passes.
  • the first video data uploaded by the device is encrypted video data
  • before receiving the first video data uploaded by the device end, the method further includes:
  • the predetermined first encryption information is sent to the device end, where the first encryption information is used by the device end to encrypt the third video data collected by the device end.
  • adding first decryption information to the expanded video frame structure includes:
  • the encrypted first encryption information is added to the network abstraction layer unit in the expanded video frame structure.
  • a second aspect of an embodiment of the present application provides a video playback method, which is applied to a browser side, and the method includes:
  • Receive second video data returned by the server, where the second video data includes video frames and audio frames in the first video data sent by the device end, the video frames are video frames to which predetermined first decryption information is added after structural expansion, and the audio frames are audio frames encrypted by first encryption information corresponding to the first decryption information;
  • the encrypted audio frame is decrypted according to the first decryption information in the second video data, and the video is played according to the video frame and the decrypted audio frame.
  • decrypting the encrypted audio frame according to the first decryption information in the second video data includes:
  • the encrypted audio frame is decrypted according to the first decryption information.
  • the method further includes:
  • When the video frame is in an encrypted state, the video frame is decrypted using the first decryption information.
  • parsing the video frame having the extended video frame structure to obtain the first decryption information included in the video frame includes:
  • the video frame having the extended video frame structure is parsed, and the extended information of the video frame is decrypted by using predetermined second decryption information to obtain the first decryption information included in the video frame.
  • a third aspect of an embodiment of the present application provides a server, including a memory, a processor, a communication unit, and a computer program stored in the memory and executable on the processor, wherein:
  • the communication unit is used to send and receive data or instructions to the browser or device;
  • the processor is used to execute the video playback method as described in any one of the first aspects.
  • a fourth aspect of an embodiment of the present application provides a browser end, comprising a memory, a processor, a communication unit, and a computer program stored in the memory and executable on the processor, wherein:
  • the communication unit is used to send and receive data or instructions to the server;
  • the processor is used to execute the video playback method as described in any one of the second aspects.
  • a fifth aspect of the embodiment of the present application provides a video playback system, the system comprising a browser side, a server side and a device side, wherein:
  • the server is used to execute the video playback method as described in any one of the first aspects
  • the browser end is used to execute the video playback method as described in any one of the second aspects.
  • a sixth aspect of an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the method described in any one of the first aspect or the second aspect are implemented.
  • when the server receives a video playback request from the browser, it sends signaling to the device to open the video, receives the first video data uploaded by the device, expands the video frame structure of the video frame in the first video data, adds the first decryption information to the expanded video frame structure, encrypts the audio frame in the first video data with the first encryption information, encapsulates the expanded video frame and the encrypted audio frame as the second video data, and sends it to the browser for playback. In this way, when the browser plays the video, there is no need to separately call the server's interface to query the first decryption information, which removes one business interaction and helps to reduce the access pressure on the server.
  • the first decryption information is transmitted through the expanded video frame structure, and no additional access time is consumed, thereby reducing the delay of video playback.
  • FIG1 is a schematic diagram of a video playback system according to a method provided by an embodiment of the present application.
  • FIG2 is a schematic diagram of constructing a video playback library on a browser side provided in an embodiment of the present application
  • FIG3 is a schematic diagram of an implementation flow of a video playback method provided in an embodiment of the present application.
  • FIG4 is an extended schematic diagram of a video frame structure provided in an embodiment of the present application.
  • FIG5 is a schematic diagram of a process of implementing decryption and playback on a browser side provided in an embodiment of the present application
  • FIG6 is a schematic diagram of an interactive process of video playback provided in an embodiment of the present application.
  • FIG7 is a schematic diagram of a video playback device provided in an embodiment of the present application.
  • FIG8 is a schematic diagram of an electronic device provided in an embodiment of the present application.
  • Fig. 1 is a schematic diagram of a video playback system of an implementation scenario of a video playback method provided in an embodiment of the present application.
  • the video playback system includes a browser end, a server end, and a device end.
  • the device end is the producer of video content, and the device end may include one or more network cameras, or other video acquisition devices, for collecting third video data.
  • After the device end is started, it can automatically connect to the server end to establish a communication link between the device end and the server end.
  • the device end initiates an authentication request to the server end.
  • After the server end authenticates the device end, it can set the authenticated device end to an online state. When the device end is in an online state, the browser end can play the video through the server end.
  • the third video data is collected in real time through the microphone and the image sensor, and the collected third video data is encoded and compressed through a predetermined encoding method, such as H264 or H265 encoding method, and then encrypted through a predetermined first encryption algorithm, such as AES (Advanced Encryption Standard) or RSA (Rivest-Shamir-Adleman).
  • the encrypted video data can be stored in the memory of the device.
  • the encrypted third video data can be uploaded to the server in real time for playback, or the unencrypted third video data can be uploaded to the server in real time for playback.
  • the server is used to coordinate data or signaling between the browser and the device so that the browser can play the third video data collected by the device.
  • the server manages the online and offline status of the device.
  • the server also delivers the latest first encryption information, such as a key, to the device end.
  • when the server receives a video playback request sent by the user through the browser, it notifies the device end in real time to upload the first video data, expands the video frame structure in the uploaded first video data, and adds extended information, including the first decryption information corresponding to the first encryption information, to the extended video frame structure.
  • the extended first video data is sent to the browser.
  • the browser side includes a terminal with a browser installed, and can request and play the video collected by the device side through the installed browser.
  • the browser side is the video playback client.
  • when the user plays the video, the browser side is responsible for pulling the encrypted second video data from the server, parsing the second video data to obtain the first decryption information, and decrypting the second video data, thereby realizing the video playback.
  • the browser side will implement a video playback library (videoSdk), which can be directly called by the web program without installing any plug-ins.
  • the video playback library includes two sub-libraries, namely the decryption library (videoplayer.wasm) in wasm (WebAssembly) format, and the playback library (videoplayer.js) in js (JavaScript) format.
  • the WebAssembly method can be used to compile programs such as the video data processing module, the multimedia processing tool FFmpeg, and the decryption algorithms (including AES, RSA, etc.) into a video playback sub-library in wasm format, thereby realizing core business logic such as video data decapsulation, decryption, FFmpeg soft decoding, and FMP4 (Fragmented MP4) file encapsulation.
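As an illustration of how the js playback layer could load such a wasm sub-library, the following sketch uses the browser's standard WebAssembly API. The file name videoplayer.wasm and the idea of simply inspecting its exports are assumptions for illustration only; a library produced with Emscripten would normally be loaded through its own generated glue code.

```ts
// Minimal sketch: load a wasm decryption/processing module in the browser.
// The module path and import object are illustrative assumptions.
async function loadDecryptionLibrary(url: string): Promise<WebAssembly.Exports> {
  const { instance } = await WebAssembly.instantiateStreaming(fetch(url), {
    env: {} // any imports required by the module would be supplied here
  });
  return instance.exports;
}

async function demo(): Promise<void> {
  const exports = await loadDecryptionLibrary("videoplayer.wasm");
  // Inspect what the module exposes; a real integration would call the exported
  // decryption entry points through the Emscripten-generated bindings instead.
  console.log("wasm exports:", Object.keys(exports));
}
```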
  • the WebAssembly method can compile C/C++ programs into LLVM (Low Level Virtual Machine) bytecode, a coding format that can only be understood by computers, with the characteristics of high security and fast running speed.
  • JavaScript language can be used to implement the video streaming module, MSE (Media Source Extensions) playback module, and WebGL (Web Graphics Library) playback module.
  • the video streaming module pulls video data from the server and inputs it into videoPlayer.wasm for data processing.
  • For H.264, the video frame is directly encapsulated in FMP4 format and input into the MSE playback module for decoding and rendering.
  • For H.265, FFmpeg soft decoding is required in videoPlayer.wasm, and the decoded video data is then input into the WebGL playback module for rendering.
  • the browser can decode and play the audio and video data normally, and the user can hear the sound and see the video screen.
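A minimal sketch of the MSE playback path described above, assuming FMP4 segments produced by the wasm layer are appended to a SourceBuffer of a video element; the codec string and the way segments arrive are assumptions, not details taken from the patent.

```ts
// Feed FMP4 (fragmented MP4) segments into a <video> element via Media Source Extensions.
// The codec string below assumes H.264 Baseline video with AAC audio; adjust to the
// actual stream parameters.
function createMsePlayer(video: HTMLVideoElement,
                         mime = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"') {
  const mediaSource = new MediaSource();
  const queue: ArrayBuffer[] = [];
  let sourceBuffer: SourceBuffer | null = null;

  video.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener("sourceopen", () => {
    sourceBuffer = mediaSource.addSourceBuffer(mime);
    sourceBuffer.addEventListener("updateend", appendNext);
    appendNext();
  });

  function appendNext(): void {
    if (sourceBuffer && !sourceBuffer.updating && queue.length > 0) {
      sourceBuffer.appendBuffer(queue.shift()!);
    }
  }

  // Called by the decapsulation/decryption layer whenever a new FMP4 segment is ready.
  return function pushSegment(segment: ArrayBuffer): void {
    queue.push(segment);
    appendNext();
  };
}
```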
  • FIG3 is a schematic diagram of an implementation flow of a video playback method provided in an embodiment of the present application. As shown in FIG3 , the method includes:
  • a video playback request from a browser is received, and a signaling for opening the video is sent to a device for capturing the video.
  • a step of the device registering and going online at the server may also be included.
  • Staff or users can enter device information on the server in advance, including the device serial number, etc.
  • When the device is started, it establishes a communication link with the server through the server access information pre-set on the device, such as a TLS (Transport Layer Security) communication link, which ensures the security of the transmission process through transport encryption and identity authentication.
  • After the communication link is established, the device sends a registration package to the server, and the server authenticates the device based on the registration package. After the authentication is passed, the server sets the status of the device to the online state.
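The registration step could look roughly like the following sketch over a TLS link (Node.js tls module). The JSON package and its field names are hypothetical; the patent only states that the device sends a registration package (carrying, for example, its serial number) that the server checks against the pre-entered device information.

```ts
// Hedged sketch of device registration over a TLS link.
// The message format is hypothetical.
import * as tls from "node:tls";

function registerDevice(host: string, port: number, serialNumber: string): void {
  const socket = tls.connect({ host, port, rejectUnauthorized: true }, () => {
    // Hypothetical registration package carrying the device serial number.
    socket.write(JSON.stringify({ type: "register", serialNumber }) + "\n");
  });
  socket.on("data", (chunk) => {
    // The server is assumed to answer after authentication, e.g. {"status":"online"}.
    console.log("server reply:", chunk.toString());
  });
  socket.on("error", (err) => console.error("registration failed:", err.message));
}
```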
  • the server may send first encryption information to the device, including key information of encryption algorithms such as AES and RSA.
  • when the encryption algorithm is a symmetric encryption algorithm, the first encryption information may be encryption parameters;
  • when the encryption algorithm is an asymmetric encryption algorithm, the first encryption information may be public key information.
  • the same device may include multiple video channels.
  • different first encryption information may be configured for different video channels, including different encryption algorithms or encryption parameters. After receiving the first encryption information, the device may encrypt the corresponding video channel and respond to the server with the result information of whether the setting is successful.
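A hypothetical data structure for holding per-channel first encryption information on the server; the field names and algorithm identifiers are illustrative, since the patent only says that different channels may be given different encryption algorithms or parameters.

```ts
// Hypothetical per-channel encryption configuration pushed to the device.
type EncryptionAlgorithm = "AES-128-CBC" | "AES-256-CBC" | "RSA";

interface FirstEncryptionInfo {
  algorithm: EncryptionAlgorithm;
  key: Uint8Array;   // symmetric key or public key material
  iv?: Uint8Array;   // initialization vector for block-cipher modes
}

// channel id -> encryption settings for that channel
const channelEncryption = new Map<number, FirstEncryptionInfo>();

channelEncryption.set(1, {
  algorithm: "AES-128-CBC",
  key: crypto.getRandomValues(new Uint8Array(16)),
  iv: crypto.getRandomValues(new Uint8Array(16)),
});
```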
  • the user can send a video playback request to the server through the browser.
  • the video playback request may include device information or video channel information.
  • the media protocol between the browser and the server can use the standard HTTP-FLV (Flash Video over HTTP) transmission protocol.
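A hedged sketch of how the browser-side streaming module could pull an HTTP-FLV stream with fetch() and a ReadableStream reader; the endpoint URL and the chunk handler are assumptions for illustration.

```ts
// Pull an HTTP-FLV stream as a sequence of byte chunks.
async function pullFlvStream(url: string,
                             onChunk: (data: Uint8Array) => void): Promise<void> {
  const response = await fetch(url);
  if (!response.ok || !response.body) {
    throw new Error(`stream request failed: ${response.status}`);
  }
  const reader = response.body.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(value); // e.g. forward to videoPlayer.wasm for demuxing and decryption
  }
}

// Usage (hypothetical endpoint):
// pullFlvStream("https://server.example/live/channel1.flv", chunk => wasmDemux(chunk));
```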
  • After receiving the video playback request from the browser, the server will send the video opening signaling through the communication link established between the device and the server (such as a TLS communication link).
  • the signal can carry information such as the channel information of the video to be opened, the server IP address, and the media port.
  • After the device receives the video opening signaling sent by the server, it establishes a media link (such as a TLS media link) between the device and the server through the server IP address and media port information carried in the signaling. At the same time, the device also collects the third video data in real time through the microphone and camera, compresses the video data with H264 or H265 encoding, and can also use the first encryption information sent by the server to encrypt the video frames and audio frames in the video data. After completing the encryption process, the device can upload the encrypted first video data through the established TLS media link.
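The per-frame encryption step on the device end might look like the following sketch, which uses AES-128-CBC from Node's crypto module; a real device would typically do this in firmware (C/C++), and the exact algorithm, key handling and framing are assumptions here.

```ts
// Illustrative per-frame encryption on the device end.
import { createCipheriv, randomBytes } from "node:crypto";

function encryptFrame(payload: Buffer, key: Buffer, iv: Buffer): Buffer {
  const cipher = createCipheriv("aes-128-cbc", key, iv);
  return Buffer.concat([cipher.update(payload), cipher.final()]);
}

// Usage: encrypt each compressed audio/video frame before packing it into the
// first video data that is uploaded over the TLS media link.
const key = randomBytes(16); // in the patent this comes from the server (first encryption information)
const iv = randomBytes(16);
const encrypted = encryptFrame(Buffer.from([/* H264/H265 frame bytes */]), key, iv);
```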
  • the first video data uploaded by the device end and received by the server is encrypted using the predetermined first encryption information.
  • the video frame structure in the first video data is expanded
  • first decryption information is added to the expanded video frame structure, wherein the first decryption information corresponds to the first encryption information.
  • the media port of the server may expand the video frame structure of the first video data and add the first decryption information to the video frame structure.
  • the first video data may be unpacked to restore the H264/H265 video frame and the original audio frame.
  • the video frame structure can be extended by prepending, in front of the I frame of the original video data, a NALU (Network Abstraction Layer Unit, used to encapsulate the data provided by the video coding layer for network transmission) containing SEI (Supplemental Enhancement Information, a mechanism of the H264/H265 video compression standards for adding additional information to the video code stream), and the first decryption information is added to the NALU unit body.
  • This method does not destroy the original frame structure of the H264/H265 video, and standard streaming media transmission protocols such as HTTP-FLV, HLS (HTTP Live Streaming) or RTMP (Real-Time Messaging Protocol, an open protocol for audio, video and data transmission between Flash players and servers) can still be used to distribute the audio and video data.
  • each NALU unit includes a NALU header and a NALU body.
  • the NALU units of a video frame may include an SPS (Sequence Parameter Set, which contains information that applies to the entire image sequence), a PPS (Picture Parameter Set, which contains information that applies to all slices of an image), and basic image information.
  • the extended supplemental enhancement information (SEI) carries the extended information, i.e., the first decryption information.
  • the extended information can be re-encrypted by the predetermined second encryption information.
  • after the browser receives the second video data, the re-encrypted first decryption information can be decrypted based on the predetermined second decryption information to obtain the first decryption information.
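A sketch of the frame-structure expansion described above, assuming H.264 Annex-B framing: an SEI NALU of payload type 5 (user_data_unregistered) carrying the re-encrypted first decryption information is built and prepended to the I-frame access unit. The marker UUID and payload layout are illustrative assumptions; H.265 uses a two-byte NAL header and SEI NAL types 39/40 instead.

```ts
// Build an H.264 SEI NALU (type 6, payload type 5) carrying the encrypted key
// information and prepend it to the Annex-B data of an I frame.
const SEI_UUID = new Uint8Array(16).fill(0xab); // hypothetical marker UUID

function addEmulationPrevention(rbsp: Uint8Array): Uint8Array {
  const out: number[] = [];
  let zeros = 0;
  for (const b of rbsp) {
    if (zeros >= 2 && b <= 0x03) { out.push(0x03); zeros = 0; }
    out.push(b);
    zeros = b === 0x00 ? zeros + 1 : 0;
  }
  return Uint8Array.from(out);
}

function buildSeiNalu(encryptedKeyInfo: Uint8Array): Uint8Array {
  const payloadSize = SEI_UUID.length + encryptedKeyInfo.length;
  const rbsp: number[] = [0x05]; // payload type 5: user_data_unregistered
  let size = payloadSize;        // payload size, 0xFF-extended per the H.264 syntax
  while (size >= 255) { rbsp.push(0xff); size -= 255; }
  rbsp.push(size);
  rbsp.push(...SEI_UUID, ...encryptedKeyInfo, 0x80); // 0x80 = rbsp trailing bits
  const body = addEmulationPrevention(Uint8Array.from(rbsp));
  // 4-byte start code + NAL header (nal_unit_type = 6, SEI) + body
  return Uint8Array.from([0x00, 0x00, 0x00, 0x01, 0x06, ...body]);
}

// Prepend the SEI NALU without touching the original I-frame bytes.
function expandIFrame(iFrameAnnexB: Uint8Array, encryptedKeyInfo: Uint8Array): Uint8Array {
  const sei = buildSeiNalu(encryptedKeyInfo);
  const out = new Uint8Array(sei.length + iFrameAnnexB.length);
  out.set(sei, 0);
  out.set(iFrameAnnexB, sei.length);
  return out;
}
```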
  • the expanded video frame and the encrypted audio frame are encapsulated into second video data, and the second video data is sent to the browser side.
  • the second video data is used by the browser side to decrypt and play the second video data according to the first decryption information in the extended video frame structure.
  • the unpacked video frame can be an unencrypted video frame in H264 or H265 format; after the video frame structure is expanded to carry the extended information, the video frame may either be encrypted by the first encryption information or left unencrypted, and it is then encapsulated together with the encrypted audio frame into the second video data.
  • the encryption state of the video frame in the second video data may be switched according to the duration of the second video data sent to the browser.
  • for example, within a first predetermined time period, the video frame in the transmitted second video data may be an unencrypted video frame, and the first decryption information may be obtained by parsing based on the second decryption information, or the first decryption information in the extended information may be directly obtained.
  • after the first predetermined time period, the video frames in the second video data are encrypted video frames.
  • the browser can decrypt the video frames and audio frames in the second video data based on the first decryption information obtained within the first predetermined time period to obtain video frames and audio frames for playback.
  • the server can distribute the encrypted audio frames and video frames with extended SEI to the browser through the TLS media link.
  • the playback library of the JS layer on the browser side can pull the audio and video streams, and can decapsulate the audio and video data through the integrated video playback library (videoSdk), and parse the SEI extension information of the video frame I frame to obtain the decryption algorithm and parameters, including the encrypted first decryption information.
  • the encrypted first decryption information can be decrypted according to the preset second decryption information to obtain the first decryption information, including information such as the encryption algorithm type and decryption parameters for decrypting the audio frame, or the audio frame and the video frame.
  • the video playback library (videoSdk) of the WASM layer can use the decryption algorithm type and decryption parameters in the parsed first decryption information to decrypt the audio frame and video frame to obtain the original audio frame data and video frame data. This includes determining whether the video frame is encrypted: if it is encrypted, it is decrypted through the first decryption information to obtain a video frame that can be used for encapsulation and playback; if the video frame is unencrypted, it can be used directly.
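As an illustrative stand-in for this wasm-layer decryption step, the sketch below uses the browser's Web Crypto API and assumes the parsed first decryption information yields an AES-CBC key and IV; the patent itself performs the decryption inside videoplayer.wasm.

```ts
// Decrypt one encrypted frame with AES-CBC via Web Crypto (illustrative stand-in).
async function decryptFrame(
  encrypted: ArrayBuffer,
  rawKey: Uint8Array,
  iv: Uint8Array
): Promise<Uint8Array> {
  const key = await crypto.subtle.importKey("raw", rawKey, { name: "AES-CBC" }, false, ["decrypt"]);
  const plain = await crypto.subtle.decrypt({ name: "AES-CBC", iv }, key, encrypted);
  return new Uint8Array(plain);
}

// Usage: decrypt each encrypted audio frame (and, when present, each encrypted video
// frame) before FMP4 encapsulation; unencrypted video frames are passed through as-is.
```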
  • For H.264 video frames, the video playback library (videoSdk) will encapsulate the original audio and video frames in FMP4 format, and decode and play them through the browser's MSE mode.
  • For H.265 video frames, the video playback library will use WebAssembly soft decoding to decode H.265 into YUV data, and then call the browser's WebGL interface for rendering, so that the browser can display the video screen and play the sound normally.
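The patent renders decoded H.265 frames through the browser's WebGL interface. As a simplified stand-in for that rendering step, the sketch below converts one I420 (YUV 4:2:0) frame to RGBA on the CPU and draws it with a 2D canvas; a production player would upload the Y/U/V planes as WebGL textures and convert them in a fragment shader.

```ts
// Simplified stand-in for the WebGL rendering step: draw one I420 frame on a canvas.
function drawI420Frame(
  canvas: HTMLCanvasElement,
  y: Uint8Array, u: Uint8Array, v: Uint8Array,
  width: number, height: number
): void {
  const ctx = canvas.getContext("2d")!;
  const image = ctx.createImageData(width, height);
  const rgba = image.data;
  for (let row = 0; row < height; row++) {
    for (let col = 0; col < width; col++) {
      const yVal = y[row * width + col];
      const uvIndex = (row >> 1) * (width >> 1) + (col >> 1);
      const uVal = u[uvIndex] - 128;
      const vVal = v[uvIndex] - 128;
      const i = (row * width + col) * 4;
      // BT.601 approximation of the YUV -> RGB conversion
      rgba[i]     = clamp(yVal + 1.402 * vVal);
      rgba[i + 1] = clamp(yVal - 0.344 * uVal - 0.714 * vVal);
      rgba[i + 2] = clamp(yVal + 1.772 * uVal);
      rgba[i + 3] = 255;
    }
  }
  ctx.putImageData(image, 0, 0);
}

function clamp(value: number): number {
  return value < 0 ? 0 : value > 255 ? 255 : value | 0;
}
```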
  • FIG6 is a schematic diagram of an interactive process of video playback provided by an embodiment of the present application, which is described in detail as follows:
  • Step 1 The user enters the device information on the server.
  • When the device is turned on, it establishes a TLS communication link with the server and sends a registration package to the server.
  • the server will authenticate the device information and put the device online after the authentication is passed.
  • Step 2 The server sends the first encryption information (such as the key of the AES or RSA encryption algorithm) for encrypting the audio and video frames to the device.
  • different encryption algorithms or encryption parameters can be configured for different channels to further ensure the security of the video content.
  • After receiving the new first encryption information, the device will use it to encrypt the audio and video frames in the third video data, and respond to the server with the result information of whether the setting is successful.
  • Step 3 After the device is online, the user requests the server to play the video through the browser, which can carry parameters such as device information and channel information.
  • the media protocol between the browser and the server can use the standard HTTP-FLV protocol.
  • Step 4 After the server receives the video playback request sent by the user through the browser, it will send a signal to open the video through the TLS communication link established between the device and the server.
  • the signal will carry information such as channel information, server IP and media port.
  • Step 5 After the device receives the signaling from the server to open the audio and video, it establishes a TLS media link between the device and the server through the server IP and media port information carried in the signaling. At the same time, the device also collects the third video data in real time through the microphone or camera, compresses the third video data with H264 or H265 encoding, and then encrypts the third video data using the first encryption information sent by the server.
  • Step 6 The device uploads the encrypted third video data, that is, the first video data, through the established TLS media link.
  • Step 7 After receiving the first video data, the media port of the server will unpack the audio frame and video frame to restore the H264/H265 video frame and the original audio frame.
  • an SEI NALU unit can be extended in front of the original video data as extended information, and the secondary encrypted key information can be added to the NALU unit body.
  • the advantage of this method is that it will not destroy the original frame structure of H264/H265, and can use standard streaming media transmission protocols such as HTTP-FLV/HLS/RTMP for audio and video data distribution.
  • For audio frames, the entire audio data can be directly encrypted.
  • Step 8 The server distributes the encrypted audio frame and the SEI-extended video frame, that is, the second video data, to the browser through the TLS media link.
  • Step 9 The browser decapsulates the second video data through the integrated video playback library videoSdk, obtains the encrypted first decryption information from the SEI extended data of the I frame, and then decrypts the encrypted first decryption information to obtain the first decryption information, that is, the original encryption algorithm type and decryption parameters.
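The browser-side counterpart of the SEI expansion could be sketched as follows: scan the Annex-B data of an I frame for an SEI NALU (type 6), strip emulation-prevention bytes, and return the payload that follows the marker UUID, i.e. the encrypted first decryption information. The UUID and payload layout are the same illustrative assumptions used in the construction sketch earlier.

```ts
// Remove 0x03 emulation-prevention bytes that follow two zero bytes.
function removeEmulationPrevention(nalu: Uint8Array): Uint8Array {
  const out: number[] = [];
  for (let i = 0; i < nalu.length; i++) {
    if (i >= 2 && nalu[i] === 0x03 && nalu[i - 1] === 0x00 && nalu[i - 2] === 0x00) continue;
    out.push(nalu[i]);
  }
  return Uint8Array.from(out);
}

// Find the SEI NALU in an H.264 Annex-B I frame and return the encrypted key info.
function extractSeiKeyInfo(annexB: Uint8Array, uuid: Uint8Array): Uint8Array | null {
  for (let i = 0; i + 4 < annexB.length; i++) {
    // find a 00 00 01 start code
    if (annexB[i] !== 0x00 || annexB[i + 1] !== 0x00 || annexB[i + 2] !== 0x01) continue;
    const nalStart = i + 3;
    if ((annexB[nalStart] & 0x1f) !== 6) continue; // not an SEI NALU
    // find the end of this NALU (next start code or end of buffer)
    let end = annexB.length;
    for (let j = nalStart + 1; j + 3 < annexB.length; j++) {
      if (annexB[j] === 0x00 && annexB[j + 1] === 0x00 &&
          (annexB[j + 2] === 0x01 || (annexB[j + 2] === 0x00 && annexB[j + 3] === 0x01))) {
        end = j;
        break;
      }
    }
    const rbsp = removeEmulationPrevention(annexB.subarray(nalStart + 1, end));
    let p = 0;
    let payloadType = 0;
    while (rbsp[p] === 0xff) { payloadType += 255; p++; }
    payloadType += rbsp[p++];
    let size = 0;
    while (rbsp[p] === 0xff) { size += 255; p++; }
    size += rbsp[p++];
    if (payloadType !== 5) continue; // not user_data_unregistered
    const body = rbsp.subarray(p, p + size);
    if (uuid.every((b, k) => body[k] === b)) {
      return body.subarray(uuid.length); // encrypted first decryption information
    }
  }
  return null;
}
```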
  • Step 10 The video playback library videoSdk uses the algorithm type and decryption parameters of the parsed first decryption information to decrypt the audio frame or video frame of the second video data to obtain the original audio frame data and video frame data;
  • Step 11 For H.264 video frames, the video playback library (videoSdk) will encapsulate the original audio and video frames in FMP4 format, and decode and play them through the browser's MSE mode.
  • For H.265 video frames, the video playback library will use WebAssembly soft decoding to decode H.265 into YUV data, and then call the browser's WebGL interface for rendering, so that the browser can display the video and play the sound normally.
  • the decryption algorithm type and decryption parameters of the first decryption information are transmitted through the extended video frame structure, for example through the extended I-frame SEI.
  • the browser does not need to call the server's interface separately for query, which fundamentally reduces one business interaction.
  • the decryption algorithm type and decryption parameters of the first decryption information are transmitted through the extended video frame structure, such as the extended video I frame SEI.
  • the browser does not need to call the server's interface separately for query, which fundamentally reduces one business interaction and does not consume additional access time.
  • the video playback library (videoSdk) based on the WebAssembly method has the characteristics of fast execution speed and does not cause additional performance loss, so it will not affect the playback experience.
  • TLS can be used between the device and the server for communication and uploading of audio and video data
  • HTTPS can be used between the browser and the server for communication and video data pulling, making network transmission safe and reliable.
  • the first video data uploaded by the device to the server is encrypted video data.
  • the second video data forwarded by the server to the browser is also encrypted video data. Without data decryption, the video data cannot be played even if it is captured.
  • the video playback library (videoSdk) is in the underlying virtual machine bytecode format.
  • the decryption library videoPlayer.wasm performs first decryption information restoration and video decryption playback. It is impossible to obtain the key information and video content in the first decryption information by modifying the web page source code.
  • The video playback library (videoSdk) can be published together with the web program, and there is no need to install other plug-ins separately.
  • When the library needs to be updated, you only need to republish the web program, which will not bring additional work costs for subsequent operation and maintenance.
  • the first encrypted information is transmitted by extending the SEI NALU unit of the I frame in the video frame structure, which will not damage the frame structure.
  • the browser and the server can still use standard streaming protocols such as HTTP-FLV, HLS, RTMP, etc., which has good versatility and scalability.
  • FIG. 7 is a schematic diagram of a video playback device applied to a server provided in an embodiment of the present application. As shown in FIG. 7 , the device includes:
  • the signaling sending unit 701 is used to receive a video playback request from a browser and send a signaling for opening the video to a device for capturing the video.
  • the expansion unit 702 is used to receive the first video data uploaded by the device end, encrypt the audio frame in the first video data by using the predetermined first encryption information, expand the video frame structure in the first video data, and add the first decryption information to the expanded video frame structure, wherein the first decryption information corresponds to the first encryption information.
  • the playback unit 703 is used to encapsulate the extended video frame and the encrypted audio frame into second video data, and send the second video data to the browser side.
  • the second video data is used by the browser side to decrypt and play the second video data according to the first decryption information in the extended video frame structure.
  • the video playing device shown in FIG. 7 corresponds to the video playing method shown in FIG. 3 .
  • an embodiment of the present application also provides a browser-based video playback device, including:
  • a request sending unit used to send a video play request to the server, wherein the video play request includes device information of the video requested to be played;
  • a second video data receiving unit is used to receive second video data returned by the server, wherein the second video data includes video frames and audio frames in the first video data sent by the device, and the video frames are video frames with predetermined first decryption information added after structural expansion, and the audio frames are audio frames encrypted by first encryption information corresponding to the first decryption information;
  • a decryption unit is used to decrypt the encrypted audio frame according to the first decryption information in the second video data, and play the video according to the video frame and the decrypted audio frame.
  • the video playback device based on the browser side corresponds to the video playback device based on the device side.
  • FIG8 is a schematic diagram of an electronic device provided in an embodiment of the present application.
  • the electronic device can be a browser side or a server side.
  • the communication unit is used to send and receive data and signaling, including the sending and receiving of data and signaling between the browser side and the server side, or between the server side and the device side.
  • the electronic device 8 of this embodiment includes: a processor 80, a communication unit and a memory 81, and a computer program 82 stored in the memory 81 and executable on the processor 80, such as a video playback program.
  • the processor 80 executes the computer program 82, the steps in each of the above-mentioned video playback method embodiments are implemented.
  • the processor 80 executes the computer program 82, the functions of each module/unit in the above-mentioned device embodiments are implemented.
  • the computer program 82 may be divided into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to complete the present application.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, which are used to describe the execution process of the computer program 82 in the electronic device 8.
  • the electronic device may include, but is not limited to, a processor 80 and a memory 81.
  • FIG8 is merely an example of the electronic device 8 and does not limit the electronic device 8.
  • the electronic device may include more or fewer components than shown in the figure, or may combine certain components, or different components.
  • the electronic device may also include an input/output device, a network access device, a bus, etc.
  • the processor 80 may be a central processing unit (CPU), other general-purpose processors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or any conventional processor, etc.
  • the memory 81 may be an internal storage unit of the electronic device 8, such as a hard disk or memory of the electronic device 8.
  • the memory 81 may also be an external storage device of the electronic device 8, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), etc. equipped on the electronic device 8.
  • the memory 81 may also include both an internal storage unit of the electronic device 8 and an external storage device.
  • the memory 81 is used to store the computer program and other programs and data required by the electronic device.
  • the memory 81 may also be used to temporarily store data that has been output or is to be output.
  • those skilled in the relevant field can clearly understand that, for convenience and simplicity of description, the division of the above-mentioned functional units and modules is only used as an example for illustration.
  • the above-mentioned function allocation can be completed by different functional units and modules as needed, that is, the internal structure of the device can be divided into different functional units or modules to complete all or part of the functions described above.
  • the functional units and modules in the embodiment can be integrated in a processing unit, or each unit can exist physically separately, or two or more units can be integrated in one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or in the form of software functional units.
  • the disclosed devices/terminal equipment and methods can be implemented in other ways.
  • the device/terminal equipment embodiments described above are only schematic.
  • the division of the modules or units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments of the present application can also be completed by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium.
  • the computer program is executed by the processor, the steps of the above-mentioned method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code can be in source code form, object code form, executable file or some intermediate form.
  • the computer-readable medium may include: any entity or device that can carry the computer program code, recording medium, U disk, mobile hard disk, disk, optical disk, computer memory, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), electric carrier signal, telecommunication signal and software distribution medium.
  • the content contained in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction.
  • computer-readable media do not include electric carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application relates to the field of video, and provides a video playback method and system, and a storage medium. The method comprises: receiving a video playback request from a browser end, and sending, to a device end for capturing a video, signaling for opening the video; receiving first video data uploaded by the device end, encrypting an audio frame in the first video data by means of predetermined first encryption information, expanding a video frame structure in the first video data, and adding first decryption information to the expanded video frame structure; and sending an expanded video frame and the encrypted audio frame to the browser end, so that the browser end performs decryption and playback according to the first decryption information in the expanded video frame structure. According to the present application, a video frame structure is expanded, which avoids the need to separately call an interface of the server end to query the first decryption information, and helps reduce the access pressure on the server end and reduce the video playback delay.
PCT/CN2022/128383 2022-10-28 2022-10-28 Video playback method and system, and storage medium WO2024087208A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280063551.5A 2022-10-28 2022-10-28 Video playback method, system and storage medium
PCT/CN2022/128383 WO2024087208A1 (fr) 2022-10-28 2022-10-28 Video playback method and system, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/128383 WO2024087208A1 (fr) 2022-10-28 2022-10-28 Video playback method and system, and storage medium

Publications (1)

Publication Number Publication Date
WO2024087208A1 true WO2024087208A1 (fr) 2024-05-02

Family

ID=90829780

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/128383 WO2024087208A1 (fr) 2022-10-28 2022-10-28 Video playback method and system, and storage medium

Country Status (2)

Country Link
CN (1) CN118120238A (fr)
WO (1) WO2024087208A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105704545A (zh) * 2016-01-20 2016-06-22 中国科学院信息工程研究所 Key synchronization information transmission method based on H.264 video streams
US20210112288A1 (en) * 2018-05-28 2021-04-15 Alibaba Group Holding Limited Network live-broadcasting method and apparatus
CN112822518A (zh) * 2021-04-19 2021-05-18 浙江华创视讯科技有限公司 Video playback method, apparatus, system, electronic device and storage medium
CN114189713A (zh) * 2021-12-21 2022-03-15 杭州当虹科技股份有限公司 Content encryption method


Also Published As

Publication number Publication date
CN118120238A (zh) 2024-05-31

Similar Documents

Publication Publication Date Title
US9038147B2 (en) Progressive download or streaming of digital media securely through a localized container and communication protocol proxy
US8452008B2 (en) Content distributing method, apparatus and system
CN112822518A (zh) 视频播放方法、装置、***、电子设备和存储介质
WO2021072878A1 (fr) Procédé et appareil de chiffrement et de déchiffrement de données audio/vidéo utilisant rtmp, et support de stockage lisible
US7249264B2 (en) Secure IP based streaming in a format independent manner
CN106331853B (zh) 多媒体解封装方法及装置
US11457254B2 (en) Systems and methods for secure communications between media devices
US9485533B2 (en) Systems and methods for assembling and extracting command and control data
CN110061962B (zh) 一种视频流数据传输的方法和装置
CN103004219A (zh) 用于防止传送的视频数据的篡改的***和方法
CN110012260A (zh) 一种视频会议内容保护方法、装置、设备及***
CN110611830A (zh) 一种视频处理方法、装置、设备及介质
KR101815467B1 (ko) 보안 에이전트를 이용한 보안 감시 강화 시스템
US20080037782A1 (en) Reduction of channel change time for digital media devices using key management and virtual smart cards
WO2020073777A1 (fr) Procédé et appareil de traitement multimédia
WO2024087208A1 (fr) Procédé et système de lecture de vidéo, et support de stockage
EP3169076A1 (fr) Dispositif portable de traitement de contenu multimédia à accès contrôlé
US10231004B2 (en) Network recording service
JP2004186812A (ja) Av通信制御集積回路及びav通信制御プログラム
WO2017035018A1 (fr) Procédé et système de chiffrement, de transmission et de déchiffrement efficients de données vidéo
WO2017035784A1 (fr) Procédé de blocage du lien dynamique d'une url et système anti-lien dynamique
CN117395466B (zh) 视频传输的实时监控方法、***及电子设备
CN108400987A (zh) 一种音频播放中的地址保护策略
Wang et al. IPTV Video Hardware Encryption Transmission System Analysis
CN115695858A (zh) 基于sei加密的虚拟制片视频母片编解码***、方法及平台

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22963189

Country of ref document: EP

Kind code of ref document: A1